Modeling Real Search Skills in Action #il2013 #internetlibrarian @researchwell

[One last session from Internet Librarian that I haven’t had a chance to blog about until now.]

Tasha Bergson-Michelson, formerly Google search educator, now school librarian at Castilleja School
Julie Erickson, South Dakota State Library

Started by asking attendees to jot down some ideas for searching on the topic of the effect of sleep deprivation on the mental processing speed of students.

Here are some of the facets I wrote down:

  • Sleep: lack, hours
  • insomnia
  • Sleep deprivation
  • processing speed
  • mental process*
  • students
  • teenagers
  • adolescents

The idea of this session is not only to do a good search, but to show your users/students what you are doing, so they can apply the techniques themselves.

Reference interview: Ask “What did you mean by that term?” (In the example above, we’d want to know what age students we are talking about.)

A good question: What would the perfect book title on this topic be?

Try adding the word “scholar” in Google to get more academic results.

When you make a mistake, say, “How fascinating” as you try to figure out and explain why the search results came back the way they did. Every failed search teaches me something.

Real-life example: Tasha did a search on “Internet models,” which, it turns out, does not bring up models of how the Internet works. (NSFW)

Less is more: don’t tell everything you know.

Think before typing in a searchbox: who cares about this thing? That will give you a clue to who might be providing information about it online.

Many people don’t know about:

  • CTRL-F to find keywords on a web page.
  • Using “-” to exclude terms from a search (Boolean NOT).
  • Searching with the filetype: and color: operators.

Even when people learn about a new technique, they may not realize it’s transferable. People may learn they can do site:census.gov to limit their results to the Census web site, but not realize they can also do site:bls.gov or even site:.gov.
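To make that transfer concrete, here is what the pattern looks like in the search box (these example queries are my own illustration, reusing the sleep-deprivation topic from earlier, not queries shown in the session):

```text
sleep deprivation site:census.gov    ← only results from census.gov
sleep deprivation site:bls.gov       ← same trick, different agency
sleep deprivation site:.gov          ← any .gov domain
sleep deprivation filetype:pdf       ← the same idea transfers to other operators
```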

One good article can lead to more. Try a subject encyclopedia to get started. A New York Times article can lead to more scholarly research.

The librarian keeps you engaged by talking to you.


Building Google’s Power-Searching MOOCs #il2013 #internetlibrarian

Tasha Bergson-Michelson

Librarian at Castilleja School, formerly Google search educator

Just because it looks like magic doesn’t mean you can’t get better at it.

Google thought it should do a MOOC. It has the tools: YouTube, Docs, etc. It can also handle 10 million people at once.

Six hours of content. Wanted to reach a broad audience. Multiple choice/fill in the blank. Semi-synchronous.

Never put a midterm in the middle. Could take it as many times as you like, but had to finish by a certain date. Lots of people complained about that. “Apparently a deadline is not as firm an idea as I thought.”

Five-minute videos plus an activity. Offered a text alternative for those with different learning styles.

Videos are hard to edit and it’s hard to get everything in. In the text version, they could include more info.

When writing for 155,000 people, someone will hate every question.

Problem: Google learns from people’s bad queries, so sometimes that would cause the bad query to work for the next person.

People might learn they can search on site:bls.gov and not realize they could do the same thing for census.gov.

People from 196 countries and territories. Questions were too ethnocentric.

Question about whether the word “evolution” occurs in the Google Books copy of On the Origin of Species. Answer differs depending on the edition you search.

Improved each time and never had the same complaint twice.

People learn by watching over someone’s shoulder. How could they emulate that in the MOOC?

People have different ways of doing things.

Made 12 challenges. Didn’t have to do them and didn’t have to do them in numerical order. They required multiple steps and could be solved in multiple ways. Example: identify a feather found on the ground at the Rio Platano Biosphere Reserve.

Had to be a right-or-wrong answer or people freaked out. But then you outline your steps, and students could read each other’s answers.

Did Google Hangouts to talk about the challenges.

www.powersearchingwithgoogle.com/

Did Advanced Power Searching class.

Text usage was about 50% that of video usage. But it varied by topic.

Fish pedicure / worm therapy divide. Yarnbombing was a popular topic. If you don’t know what thousands of people will like, go for weird.

Didn’t help students with final challenges. They helped each other.

Start with outcomes:
What do you want students to know? Work backwards from there.

A list of technical skills does not equal competencies.

Format and contents must grow out of objectives.

Create a “big-idea” narrative. Key critical thinking skills. Overarching themes. Tie themes back to actionable skills, such as: there’s not usually just one way to do things.

Align desired content, user needs, and design constraints. How to talk about these things without delving into library science terminology.

Color filtering: If you search for Bach pictures that are white, you get sheet music.

Tesla: different colors for car vs. person.

Soccer players running around: use green.

This gets people’s attention and makes them listen, not the library science theory.

Test and test again:
Groups can be small. Doesn’t have to be hugely formal. Prioritize fixes, fix, and test again.

Some loved the advanced format, some hated it.

Some people spent a lot of time on it, some couldn’t.

Connect with students. Use social media to create a community.

New State of Search #IL2013

Greg Notess, long-time writer on web search.

Google:

Moving into the semantic, structured web, rather than straight keyword searching. Google has years of data about the information on the web and how people search.

Predictive search: here’s what you want to know.

Hardware.

Results page: web, news. Right column of facts: “knowledge graph.” This is the semantic, structured web, which Sir Tim Berners-Lee was calling for in 2001. Software would be able to read it.

The web didn’t have metadata (i.e., cataloging) that librarians want. For example, consistent ways to specify author and date of a web page.

Schema.org, which is supported by Google, Bing, Yahoo, and Yandex, is working on this. Info is embedded in web pages with HTML attributes (e.g., itemscope, itemtype, itemprop).
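As a rough sketch of what that markup looks like (my own example, not one from the talk; the name is hypothetical, but Person, name, and jobTitle are real schema.org vocabulary):

```html
<!-- Marking up a person so search engines can read structured facts -->
<div itemscope itemtype="https://schema.org/Person">
  <span itemprop="name">Jane Librarian</span> is a
  <span itemprop="jobTitle">reference librarian</span>.
</div>
```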

Google + author profiles. Again, added to page markup.

Knowledge Graph (Bing and Facebook doing similar things): addresses, reviews, carousel of pictures on top. About real world people, places, and things. More likely to come up with simple or popular topics. Much info from Wikipedia, Freebase, Google search data, etc.

Sometimes results are outdated: death dates, marital status. Sometimes based on bad data from Wikipedia.

Results based on user’s location and previous searches.

Bing Snapshot: similar to Google Knowledge Graph. Carousel, social networking links.

Google’s hardware hopes:

Google Glass. Not many people here want to try them. However, some reviewers started out skeptical, but then loved them. Uses input from what you look at and what you say, rather than what you type.

Rumored smart watches.

Chromebook tablet. (Other: Windows tablet, iPad Air)

Discovery and Libraries:

Discovery systems use one big database (as opposed to federated searching) for faster results.

Lots of different content, blended results, faceted options.

Results and opinions differ. Vendors and early adopters are very enthusiastic, but that’s how it was with federated search, too.

Next generation ILSes may have discovery built in.

We’d like to be more Google-like, but Google has 46,000 employees and $57 billion in revenue.

Lots of library systems now have facets on the left. Google had that, too, but then got rid of them. (Options now in a small row at the top.) Does this mean library systems will go that way, too? OTOH, shopping sites (even Google Shopping) have facets.

Google often uses keywords in search to display different types of content (for example, “Monterey pictures”). If you search more scholarly terms, Google knows to list scholarly articles. Will library systems be able to do this sort of thing?

Device diversity:

Desktops, laptops, tablets, smart phones, glasses, smart watches.

Desktop search: cached version available, but not in mobile Google app.

iPad: Knowledge Graph shows at the top.

Wikipedia’s mobile app: minimized for phone display. External data links are hidden until you open them. View history doesn’t show up in the mobile version; neither does “cite this page.”

Google Goggles: searches based on a picture, such as landmarks or barcodes. Only on mobile.