The “Always Check” Approach to Online Literacy

I just read this article with useful tips on how to verify your news feeds, and why you should check everything, even material you are sure is correct. Have you ever thought about it? When you spread news, share it on social media, or mention it, how can you be sure the information is correct? Who do you trust? In Norway we have just had a debate about how well our 15-year-olds search the web, find the right information, and check whether it is accurate. Digital competency is important. How digitally competent are you as a teacher, as a responsible adult and citizen? I am sharing the article in full here. If you prefer to read the original article, click here.

One of the things I’ve been trying to convince people of for the past year and a half is that the only viable literacy solution to web misinformation involves always checking any information in your stream that you find interesting, emotion-producing, or shareable. It’s not enough to check the stuff that is suspicious: if you apply your investigations selectively, you’ve already lost the battle.

Once you accept that, certain things become clear. Your methods of checking have to be really quick. They have to be habitual, automatic. They can’t be cognitively expensive. And those who teach media literacy have to be conscious of this trade-off between depth and efficacy and act accordingly.

There are some hard problems with misinformation on the web. But for the average user, a lot of what goes wrong comes down to failure to follow simple and quick processes of verification and contextualization. Not after you start thinking, but before you do.

I can’t get these processes down to a two-second mirror-and-head-check, but I can get them close. What follows are some of the methods we teach students in our work. It will seem like there is a lot of stuff to learn here, but you’ll notice that it comes down to the same strategies repeated in different contexts. This repetition is a feature, not a bug.

Is This the Right Site?

Today’s news reveals that Russian-connected entities were trying to spoof sites like the Hudson Institute for possible spear-phishing campaigns. How do we know if the Hudson Institute site we are on is really the real site? Here’s our check:

[Screenshot: checking the Hudson Institute site against its Wikipedia entry]

The steps:

  • Go up to the “omnibar” (the address bar)
  • Strip off everything after the domain name, type “wikipedia”, and press Enter
  • This generates a Google search for that URL with the Wikipedia page at the top
  • Click that link, then check in the sidebar that the URL matches.
  • Forty-nine out of fifty times it will. The fiftieth time you may have some work to do.

In this case, the URL does match. What does this look like if the site is fake? Here’s an example. A while back a site at bloomberg.ma impersonated the Bloomberg News site. Let’s see what that would look like:

[Screenshot: checking the bloomberg.ma address against Wikipedia]

You do the same steps. In this case Bloomberg News is not the top result, but you scroll down and click the Bloomberg News link, and check the URL and find it is different. If you’re lazy (which I am) you might click that link to get to the real site.
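None of this requires code, but if you like to tinker, the omnibar trick can be roughed out programmatically. Here is a minimal sketch, assuming Python 3.9+ with the requests library and the public MediaWiki API; the function name wikipedia_knows_domain and the two example lookups are purely illustrative, not part of the technique itself. The idea is the same: ask whether the domain you were handed actually appears among the links Wikipedia records for that organization.

```python
# Sketch: does Wikipedia's article about an organization link to this domain?
# Assumes Python 3.9+ and the `requests` library; uses the public MediaWiki API.
import requests
from urllib.parse import urlparse

WIKI_API = "https://en.wikipedia.org/w/api.php"

def wikipedia_knows_domain(org_name: str, url: str) -> bool:
    domain = urlparse(url).netloc.lower().removeprefix("www.")

    # Find the most likely Wikipedia article for the organization.
    search = requests.get(WIKI_API, params={
        "action": "query", "list": "search", "srsearch": org_name,
        "srlimit": 1, "format": "json", "formatversion": 2,
    }).json()
    hits = search["query"]["search"]
    if not hits:
        return False
    title = hits[0]["title"]

    # Pull the external links recorded on that article
    # (ignores API continuation for brevity).
    links = requests.get(WIKI_API, params={
        "action": "query", "prop": "extlinks", "titles": title,
        "ellimit": "max", "format": "json", "formatversion": 2,
    }).json()
    page = links["query"]["pages"][0]
    for link in page.get("extlinks", []):
        target = link.get("url") or link.get("*") or ""
        if domain and domain in urlparse(target).netloc.lower():
            return True
    return False

if __name__ == "__main__":
    # Illustrative checks only; results depend on what Wikipedia currently links to.
    print(wikipedia_knows_domain("Hudson Institute", "https://www.hudson.org"))
    print(wikipedia_knows_domain("Bloomberg News", "https://bloomberg.ma"))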
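```

A mismatch here is not proof of fraud, just a signal that, as with the fiftieth case above, you may have some work to do.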

What Is the Nature of This Site?

How about this site, and its searing commentary on Antifa and journalists?

[Screenshot: the article about Antifa and journalists]

Maybe you agree with this article. I don’t, but maybe you do. And that’s okay. But do you want to share from this particular site to your friends and family and co-workers? Let’s take a look!

[Screenshot: what a quick check of the site’s background turns up]

You can dig into this further if you want, but driving traffic to a site that has published such sentences as “When blacks are left entirely to their own devices, Western civilization — any kind of civilization — disappears” is not ethical, nor likely to put you in the best light.

Is This Breaking News Correct?

Here’s some breaking news.

[Screenshot: a breaking-news tweet]

More people than you might expect believe that the blue checkmark = trustworthy. But all the blue checkmark really does is say that the person is who they say they are: that they are the person of that name and not an imposter.

Your two-second “mirror and head-check” here is going to be to always, always hover, and see what they are verified for. In this case, the verification means something: this person works for CNBC.com, a legitimate news site, and she covers a relevant beat here (the White House):

[Screenshot: the hover card showing what the reporter is verified for]

But maybe you don’t know CNBC, or maybe you see this news from someone not verified, or verified but not as a reporter. How will you know whether to share this? Because you know you’re DYING to share it and you can’t wait much longer.

Use our “check for other coverage” technique:

[Screenshot: Google News results for the story]

When a story is truly breaking, this is what it looks like. Our technique here is simple.

  • Select some relevant text.
  • Right-click (or Ctrl-click on a Mac) and choose the option to search Google for it
  • When you get to Google, don’t stop: click the “News” tab to get a more curated feed
  • Read and scan. Investigate more as necessary.

Scan the stories. If you want to be hypervigilant, scan for sources you recognize, and consider sharing one of the stories featuring original reporting instead of the tweet.
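If you like to script things, the “check for other coverage” step can also be sketched in a few lines. This is only a rough illustration, assuming Python 3.9+ and that Google News still exposes its public RSS search feed at news.google.com/rss/search; the function name other_coverage and the placeholder query are mine, not part of the technique.

```python
# Sketch: search news coverage for a snippet of text and list the headlines.
# Assumes Python 3.9+ and that the Google News RSS search feed is available.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def other_coverage(selected_text: str, limit: int = 10) -> list[tuple[str, str]]:
    """Return (headline, link) pairs from a news search for the selected text."""
    query = urllib.parse.quote(selected_text)
    feed_url = f"https://news.google.com/rss/search?q={query}&hl=en-US&gl=US&ceid=US:en"
    req = urllib.request.Request(feed_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        root = ET.fromstring(resp.read())
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title", ""), item.findtext("link", "")) for item in items]

if __name__ == "__main__":
    # Hypothetical: paste the text you selected from the tweet here.
    for headline, link in other_coverage("text selected from the breaking-news tweet"):
        print(headline, "->", link)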
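```

If the query comes back with multiple recognizable outlets reporting the same facts, you have your answer; if it comes back empty, that is your cue to wait.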

I’m going to state this again, but if you look at that loop above you’ll see this is about a seven-second operation. You can absolutely do this every time before you share. And given it is so easy, it’s irresponsible not to. I’m not going to tell you you are a bad person if you don’t do these checks, but I think in your heart you already know.

Teach This Stuff First Already

Maybe you think you do this, or you can really “recognize” what’s fake by looking at it. I am here to tell you that statistically it’s far more likely you’re fooling yourself.

If you’re a human being reading this on the internet and if you’re not a time traveler from some future, better world, there is less than a one in a hundred chance you do the sort of checks we’re showing regularly. And if you do do this regularly — and not just for the stuff that feels fishy — then my guesstimate is you’re about two to three standard devs out from the mean.

Now imagine a world where checking your mirrors before switching lanes was rare, three standard-deviations-out behavior. What would the roads look like?

Well, it’d probably look like the Mad Max-like smoking heap of collisions, car fires, and carnage that is our modern web.

I get worried sometimes that I am going to become too identified with these “tricks”. I mean, I have a rich history of teaching students digital literacies that predates this work. I’ve been doing the broader work intensively for ten years. (Here’s a short rant of mine from 2009 talking about web literacy pedagogy.) I’ve read voraciously on these subjects and can talk about anything from digital redlining to polarization models to the illusory truth effect. I’m working on a project that looks to document the history of newspapers on Wikipedia. I worked on wiki with Ward Cunningham. I ran my first “students publish on the web” project in 1997.

But I end up coming back to this simple stuff because I can’t shake the feeling that digital literacy needs to start with the mirror and head-checks before it gets to automotive repair or controlled skids. Because it is these simple behaviors, applied as habit and enforced as norms, that have the power to change the web as we know it, to break our cycle of reaction and recognition, and ultimately to get even our deeper investigations off to a better start.

I have underlying principles I can detail, domain knowledge I think is important, issues around identity and intervention we can talk about. Deeper strategies for the advanced. Tips to prevent a fragility of process. Thoughts about the relationship between critical thinking and cynicism.

But for the love of God, let’s start with the head check. Source: Hapgood.

I would love to hear from you!