Camille Mathieu, Jet Propulsion Laboratory:
Presentation available at http://conferences.infotoday.com/documents/259/A105_Mathieu.pptx
Hard to get money, prove ROI
Architecture: no enterprise is the same as another
“Why can’t it work like Google?”
They use Elasticsearch.
5,000 knowledge workers
150 content silos (that they know of)
Nice display with facets.
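Since JPL's search runs on Elasticsearch, a faceted display like this typically maps to terms aggregations. A minimal query-body sketch, assuming hypothetical field names ("content", "doc_type", "author") rather than JPL's actual schema:

```python
# Sketch of an Elasticsearch query body that returns search hits plus
# facet counts via terms aggregations. Field names are hypothetical.
query_body = {
    "query": {"match": {"content": "mars rover"}},
    "aggs": {
        # Each terms aggregation becomes one facet in the UI.
        "by_type": {"terms": {"field": "doc_type"}},
        "by_author": {"terms": {"field": "author"}},
    },
    "size": 10,  # number of hits to return alongside the facet counts
}
```

The response's `aggregations` section would then supply the bucket counts shown next to each facet value.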
At first, results were unacceptable: 50% of the time, no relevant result appeared in the top 5.
Librarians made searches more relevant by reviewing the top 100 searches and tweaking results, and by deleting irrelevant pages.
Poor metadata, such as missing or generic titles (e.g., “slide 1”), was also fixed.
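A metadata-hygiene pass like that can be sketched as a simple check for missing or generic titles. This is illustrative only; the field names and the list of generic patterns are assumptions, not JPL's actual pipeline:

```python
# Flag documents whose titles are missing or generic (e.g. "Slide 1"),
# candidates for manual metadata cleanup. Field names and patterns
# are assumptions for illustration.
import re

GENERIC_TITLE = re.compile(r"^(slide|page|document|untitled)\s*\d*$", re.IGNORECASE)

def needs_title_fix(doc: dict) -> bool:
    """Return True if the document's title is missing, empty, or generic."""
    title = (doc.get("title") or "").strip()
    return not title or bool(GENERIC_TITLE.match(title))

docs = [
    {"id": 1, "title": "Slide 1"},
    {"id": 2, "title": ""},
    {"id": 3, "title": "Entry, Descent, and Landing Overview"},
]
flagged = [d["id"] for d in docs if needs_title_fix(d)]  # -> [1, 2]
```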
However, band-aid solutions are not sustainable: looking only at the top 100 searches misses all the other searches.
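The librarians' top-100 review amounts to a frequency report over the query log. A minimal sketch, assuming a log that is simply a list of query strings:

```python
# Rank queries by frequency from a search log -- the kind of report a
# librarian would review to decide which results to tweak first.
# The log format (a flat list of query strings) is an assumption.
from collections import Counter

log = [
    "mars rover", "telemetry", "mars rover", "budget template",
    "mars rover", "telemetry", "cassini images",
]

def top_queries(entries, n=100):
    """Return the n most frequent queries with their counts."""
    return Counter(entries).most_common(n)

top = top_queries(log, n=3)  # most frequent query first
```

The limitation noted above is visible here: tuning only the head of this list leaves the long tail of infrequent queries untouched.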
Responsive communication with users (who are often content creators as well) is the greatest asset for improving search.
Respect the link between DAM (digital asset management) and search.
Most effective solutions may lie in curating content.
User-search interaction: anticipatory design, social tagging
Content or repository tagging: consistent metadata
Enterprise search needs a librarian’s respect for curation and metadata.
Sarah Dahlen and Kathlene Hanson, CSU Monterey Bay
Presentation here: http://conferences.infotoday.com/documents/259/A105_Dahlen.pdf
Wanted to find out whether abstracting and indexing databases provide added value compared with a discovery tool.
Used 50 students and tested 3 search tools:
Social Sciences Abstracts
Results split evenly among the three tools, but that means 2/3 of students preferred one of the discovery layers.
Further areas to explore:
Student use of controlled vocabulary, metadata, search facets.
Library search tools’ use of: subject/discipline scoping, relevance ranking
Question re. missing metadata in documents uploaded to a repository:
JPL: Working on metadata standards.
JPL: Using SharePoint and Docudata (?)