
MS - MUN 2018 2019: Post Truth Era

Model United Nations

Videos - Post Truth Era

When watching these videos, you may want to consider the prompts for further discussion. 

"While companies like Facebook, Twitter and Google promise they will take steps to reduce "fake news," Michael J. Casey and Oliver Luckett, authors of "The Social Organism," argue the first step is an overhaul of the companies' algorithm-based platforms to make them more transparent."
  • Drawing from this video and what you have learned in introductory biology (and other) courses, do you think the analogy of social media being this living organism is fitting? Epidemiologists and other medical experts talk about ways to contain a virus that is spreading. Do you think there’s a way to contain the spread of fake news? Personally, what can you do? What can your community do?
  • Luckett discusses the idea of training the body to recognize a pathogen as foreign and react appropriately. Using the mind/body analogy, what can we do to ‘train the brain’ to recognize fake news? You have likely heard your professors talk about “critical thinking skills.” What role can the development of these skills play in detecting fake news?
  • What are the moral and ethical ramifications of maintaining propriety over algorithms currently used by profit-generating companies like Facebook? Should these algorithms be “opened up” as Casey suggests? Is there a moral obligation to do so? Or is that placing too great a financial and legal burden on social media companies?
  • Casey cites an example from Riot Games in which an “immunotherapy approach” is used to define and delineate hate speech in order to reduce its occurrence. Do you think this was a fair way to go about reducing hate speech? What about freedom of speech? What new doors might this so-called immunotherapy approach open in the discussion of the line between hate speech and freedom of speech?
  • Toward the end, Luckett says, “Evolution is not always progress.”
    Discuss! :-)
"Ryan Holiday, the author of, "Trust Me, I'm Lying," shares a bit about how he has manipulated media to get bogus, anonymous stories to the front-page of news media outlets."
  • Holiday mentions that some blog sites have a “low threshold” for what they will and will not publish. What do you think might define a “low threshold”? How would you describe a blog (or other news source) that, in contrast, has a higher threshold? What might that higher threshold entail?
  • Holiday mentions how he has sent news media outlets “fake anonymous emails” and then watched as the resulting story spread and grew in strength. What does this say about the inherent danger of the so-called ‘anonymous tip’? Should news outlets have a blanket policy of refusing to act on such tips? What effect could that have on the flow and sharing of information in a democratized society?
  • Toward the very end, the narrator asks two very good questions, and Holiday gives his answer. What do you think about his answer?
  • "If you’re not paying for it, you’re not the customer, you’re the product.”

    Discuss! :-)

"Trevor Noah, host of the Daily Show, has told BBC Hardtalk’s Zeinab Badawi that factual accuracy is the base of his best jokes."
  • Noah says, “The best jokes are based in truth.” Drawing from what you have learned in introductory psychology or sociology (and other) courses, would you agree or disagree?
  • Do you think it’s possible, or advisable, to operate in a space that, as Noah explains, is “completely neutral, devoid of all opinion, and giving everybody an equal platform to share their views”?
"Syria still in turmoil but the other political realities turned upside down amid much talk of fake news -- and post-truth politics. In first of a series looking at how the world changed in 2016 here's our special correspondent Allan Little."
  • In the opening, the narrator describes a “…shared public reality, within which they [readers] can disagree, dispute and challenge each other.” What do you think defines a “shared public reality” today? What is (or should be) the news media’s role in shaping that reality? What is (or should be) the role of various social media platforms in shaping that reality?
  • How might the aforementioned “shared public reality” conflict with what the narrator goes on to describe as “two parallel public realities”? How do you think this split occurred?
  • “Cognitive bias” has been a buzzword for a long time. One form of cognitive bias is confirmation bias. Drawing from this video and what you have learned in introductory psychology or sociology (and other) courses, what seems to be the relationship between cognitive bias and fake news? Have you ever struggled with cognitive bias? What are some ways we can combat the effects of cognitive bias when consuming news?
  • What is (or should be) journalism’s role in a democracy?
  • Toward the end, the narrator poses the question, “Who in the new media landscape is to police what’s valid and what’s fake, what’s true and what’s post-true?”
    Discuss! :-)
"Washington Post reporter Wesley Lowery says the social media giant isn't excused from making responsible editorial choices just because it wishes to see itself as a technology company first. Lowery's book is "They Can't Kill Us All: Ferguson, Baltimore, and a New Era in America's Racial Justice Movement"."
  • If Facebook has the ability to stop fake news from being spread on its website, why do you think it had not exercised that ability, at least at the time this video was made?
  • Lowery mentions an “editorial infrastructure” that helps maintain checks and balances on what’s true and what’s not. Should large social media platforms like Facebook maintain such an infrastructure? Should it be their responsibility? Or are these platforms there simply to provide a forum for communication for their users, however chaotic that forum may be?
  • Lowery says, “As soon as they [Facebook] begin playing that role at all, they now take on, I believe, a responsibility to curate this content.” Do you agree or disagree? Why? If you agree, how do you think Facebook could accomplish this? What steps can they take? If you disagree, whose responsibility, if anyone’s, should it be to curate the content that propagates via Facebook?
  • Towards the end, Lowery states, “When you choose to publish something on your platform…you are making an editorial decision to allow it to exist in a space.” How does this statement fit in with what Trevor Noah talks about in the BBC interview video?      
"This teen says the secret to creating viral hoaxes is to tell people what they want to hear — and to throw in a little Justin Trudeau."
"It's nothing new, and it didn't swing the election. "
"Do children's digital fluency allow them to distinguish between fake news and real news online? WSJ's Sue Shellenbarger has surprising results of a study of nearly 8,000 students (from grammar school through college) that tested their ability to tell news from ads and to discern websites from hate groups and mainstream professional organizations."
  • Have you ever read a story on social media that scared or shocked you? How did you react? Reflect back on that experience. Does what you read still scare you, or did you learn later on that it wasn’t anything to worry about? If your feelings about that story changed, how and why do you think they changed?
  • Shellenbarger cites a statistic from the Media Insight Project stating that by the age of 18, 88% of young adults are getting their news from Facebook. Are you one of those 88%? Be honest! What are some other, more credible and reputable news sources that you can consult to double-check a sensational story on Facebook or another social media platform? (Not sure? Check out this link: http://libguides.columbiasc.edu/fakenews/evaluating.)
  • What steps does Shellenbarger recommend that parents take to teach their children to critically evaluate sources of news? Could these recommendations be good for everyone? If you could tweak or adjust her recommendations for you and your peer group, what adjustments, if any, would you make?
"Twitter, Facebook and Google are taking steps to reduce fake news, misinformation, and harassment on the internet after users expressed concerns that false news stories and hate speech fueled divisiveness in the recent presidential election campaign."
  • Zuckerberg defended Facebook by saying, “Less than 1% of the site’s [Facebook’s] worldwide content could be classified as fake. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.” It could be argued that Zuckerberg is taking a strictly quantitative approach to the problem, stressing how small the percentage actually is. But what about a qualitative approach? Considering how influential Facebook is, could that 1% of fake content have been more influential than he is giving it credit for? Or do you agree that 1% is too small a number to worry about?
  • What separates “hate speech” from “free speech”? Do you think it’s possible for computer software, such as artificial intelligence (AI) or an algorithm, to truly and accurately detect hate speech and misinformation? Or is that ultimately a task that will (or should?) fall to real people?
  • Do you agree with Google’s decision to ban fake news websites from using Google’s ad-selling system, which will likely hurt their revenue? It could be argued that these websites are, like any other company, just trying to make money. What if you were an employee of one such website, with a family to support? Would you want to see your employer’s revenue take a hit?
"Major corporate brands are, often inadvertently, placing the ads that fund the growing number of web sites that peddle fake news online. WSJ's Lee Hawkins explains."
  • Much of what Hawkins is reporting on is beyond the control of the average person. But what are some steps and actions that you can take to educate yourself and others about fake news and hate speech?
  • Why do you think the invention of a “truth filter technology” has eluded modern society thus far? What makes this such a challenging task? Do you think there will ever be such a thing? If you were put in charge of a Truth Filter Technology Task Force, how would you go about it? What actions would this filter take to identify untrue information? Once identified, should your filter then recommend corrective or punitive measures?
"Facebook says its artificial intelligence know-how could eventually be a key to stamping out the fake news that critics say has infused the social media network. WSJ's Lee Hawkins explains."
  • Scenario: You graduate with honors from Columbia College (Go you!!) and are immediately hired by a social media Artificial Intelligence company as a researcher. Your first task is to come up with an answer to the following questions: “What’s the trade-off between filtering and censorship?” and “How can filtering technology be introduced in a responsible way?” Discuss! :-) 

(Thank you to https://libguides.columbiasc.edu/fakenews)

"Hillary Clinton calls fake news an "epidemic" that is putting lives at risk."
  • Have you ever known anyone who has been the subject of a fake news story or rumor? How did it impact that individual? Were their friends and family affected? Were they able to ‘set the story straight’? If so, how did they go about it and how long did it take?
  • What are some other, more credible and reputable news sources that you can consult to double-check a sensational story on Facebook or another social media platform? (Not sure? Check out this link: http://libguides.columbiasc.edu/fakenews/evaluating.)