How Search Drives Collaboration Adoption
When I am organizing the monthly #CollabTalk TweetJams and putting together ideas for the 7 questions that we cover (after years of experimenting, 7 questions seemed to be the right pace for a one-hour event), I often include broad topics that allow people from varying backgrounds and with differing opinions to jump in and add value. People may interpret a single question in a number of different ways, and I almost always walk away from these events with a new perspective or idea. And honestly, many of the questions posed during a tweetjam could be topics all on their own.
Case in point was Question 6 from this month’s tweetjam on “Managing the Microsoft 365 Content Lifecycle”:
“Why do companies struggle with user adoption, and what are the primary barriers to implementing a successful adoption strategy?”
I talk and write fairly often on the topic of adoption. The question can go in many directions. In my years of working with collaboration and information management technology, one of the most important factors that I have found to improve adoption is search — and the discovery user experience.
Getting your organization to use the platform which you spent so much time and effort to deploy is as much about building buzz, running contests and leaderboards (gamification), and constantly educating your users as it is about building and deploying the platform. Yes, ultimately, you need to deliver the features and business value — but even then, you’re going to need to employ some degree of salesmanship. No matter how solid the solution you deploy, adoption takes work.
So how do you get them to not only use the platform, but to stay engaged?
A cool splash screen? A dancing kitty GIF? Threats? Bribery?
The best way to get users to stay is to make it effortless for them to get what they need. I’ve often written about change management and governance, which are important in the ongoing support of any healthy collaboration environment. But search is fundamental to adoption. Give me what I need, when I need it, and where I need it.
Search has always been more of an art than a science. A few years back, I took a workshop with my friend and search expert Jeff Fried (@jefffried) to learn more about the technology behind search. While I took copious notes and understood much of the content shared, I quickly realized that a search expert I am not. However, the concepts are fairly straightforward, and worth sharing: crawl, index, query, rinse and repeat. The challenge is getting the terms right.
The crawl component goes through all of the content sources (web sites, file shares, SharePoint, profiles, etc.) and temporarily stores the information in the crawl database. The crawl database contains detailed information about each item that has been crawled, such as last crawl time, crawl ID, and type of update (full or incremental). If there is a large amount of content to crawl, you can simply add more crawl components to share the work.
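To make the idea concrete, here is a minimal sketch of what a crawl component does, in Python. Everything here is illustrative — `CrawlDatabase`, `crawl_source`, and the sample sources are hypothetical names, not actual SharePoint APIs:

```python
import time
import uuid

class CrawlDatabase:
    """Temporary store for crawled items and their crawl metadata."""
    def __init__(self):
        self.items = {}

    def record(self, url, content, update_type):
        # Track the same metadata the crawl database keeps:
        # last crawl time, a crawl ID, and the type of update.
        self.items[url] = {
            "content": content,
            "last_crawl_time": time.time(),
            "crawl_id": str(uuid.uuid4()),
            "update_type": update_type,  # "full" or "incremental"
        }

def crawl_source(source, crawl_db, full=False):
    """Walk one content source and stage every item in the crawl database."""
    for url, content in source.items():
        crawl_db.record(url, content, "full" if full else "incremental")

# Two content sources feeding one crawl database.
sources = {
    "sharepoint": {"/sites/hr/policy.docx": "vacation policy text"},
    "fileshare": {"//fs01/specs/search.txt": "search design notes"},
}
db = CrawlDatabase()
for name, source in sources.items():
    crawl_source(source, db, full=True)
print(len(db.items))  # 2 items staged for content processing
```

Adding more crawl components in a real farm is essentially running more of these workers in parallel against different sources.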
Once the items have been crawled, content processing takes the crawled information and feeds it into an index. The index parses the documents, and then transforms the crawled content into indexed content. There is usually also a separate link database that writes the information about the links and URLs associated with the information.
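The parse-and-transform step above is, at its core, building an inverted index plus a link table. A toy version, with hypothetical names and sample documents, might look like this:

```python
from collections import defaultdict

def process(crawled_items):
    """Content processing: turn crawled text into an inverted index
    (term -> document URLs) and a separate link table."""
    index = defaultdict(set)   # term -> set of document URLs
    links = defaultdict(list)  # doc URL -> outbound links
    for url, doc in crawled_items.items():
        for term in doc["text"].lower().split():
            index[term].add(url)
        links[url].extend(doc.get("links", []))
    return index, links

crawled = {
    "/sites/hr/policy.docx": {"text": "Vacation policy and benefits",
                              "links": ["/sites/hr/benefits.docx"]},
    "/sites/it/search.docx": {"text": "Search index design"},
}
index, links = process(crawled)
print(sorted(index["search"]))  # ['/sites/it/search.docx']
```

Real content processing does far more (language detection, entity extraction, property mapping), but the shape is the same: crawled content in, indexed content and link data out.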
The analytics processing component is itself made up of two parts. Search analytics goes through the crawled items to find activity information such as links, related people, and metadata. Usage analytics pulls information such as views on an item from a front-end event store; default usage events include views, recommendations displayed, and recommendations clicked. The processing component then returns this information to the content processing component so it can be included in the search index, and the results of usage analytics are stored in the analytics reporting database. Along with analytics processing come some default reports, such as popularity trends and most popular items.
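The usage analytics side can be sketched as a simple fold of front-end events into per-item counts — the kind of data a popularity-trends report would draw from. The event shapes and names here are assumptions for illustration:

```python
from collections import Counter

def aggregate(usage_events):
    """Fold raw front-end usage events into per-item event counts."""
    counts = {}
    for event in usage_events:
        item = counts.setdefault(event["item"], Counter())
        item[event["type"]] += 1
    return counts

# Default usage events: views, recommendations displayed, recommendations clicked.
events = [
    {"item": "/sites/hr/policy.docx", "type": "view"},
    {"item": "/sites/hr/policy.docx", "type": "view"},
    {"item": "/sites/hr/policy.docx", "type": "recommendation_clicked"},
]
counts = aggregate(events)
print(counts["/sites/hr/policy.docx"]["view"])  # 2
```

Feeding those counts back into the index is what lets popular items rank higher in results over time.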
There are also separate index components and partitions. Each index component takes in the information from the content processing component. Queries are then sent to the index replicas from the query processing component.
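Partitioning usually comes down to stable hash routing: each document lands in exactly one partition, and a query fans out to all of them. A minimal sketch of that idea, with assumed names throughout:

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(doc_id):
    """Stable hash routing: the same document always lands in the same partition."""
    digest = hashlib.sha1(doc_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Content processing hands each document to one index partition.
partitions = [[] for _ in range(NUM_PARTITIONS)]
for doc in ["/sites/hr/policy.docx", "/sites/it/search.docx", "/sites/pm/plan.docx"]:
    partitions[partition_for(doc)].append(doc)

# At query time, the query fans out to every partition (or a replica of it)
# and the partial results are merged. No document is indexed twice.
print(sum(len(p) for p in partitions))  # 3
```

Replicas are copies of a partition for availability and query throughput; partitions split the index itself when it grows too large for one component.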
The last piece of this puzzle is the query processing component. Within SharePoint, at least, this little guy sits between the search front-end and the index component. The purpose of the query processing component is to analyze and process the search queries and results. Once done, it submits the query to the index component and then returns the results to the front-end.
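In miniature, that middleman does three things: normalize the query, hit the index, and hand back results. A sketch under the same assumptions as above (the inverted index is a plain dict here, and real query processing also handles linguistics, spelling, and ranking):

```python
def process_query(raw_query, index):
    """Query processing: analyze the query, submit it to the index,
    and return matching documents to the front-end."""
    terms = [t.lower() for t in raw_query.split() if t]  # analyze / normalize
    if not terms:
        return []
    # Intersect posting lists: keep documents matching every term.
    hits = set(index.get(terms[0], set()))
    for term in terms[1:]:
        hits &= index.get(term, set())
    return sorted(hits)

index = {
    "vacation": {"/sites/hr/policy.docx"},
    "policy":   {"/sites/hr/policy.docx", "/sites/it/usage.docx"},
}
print(process_query("Vacation Policy", index))  # ['/sites/hr/policy.docx']
```

Because this component is stateless apart from reading the index, it is also the easiest place to scale out, which is exactly the advice that follows.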
All in all, it’s really not as complicated as it sounds. If your search is going slower than you’d like, add more horsepower. For faster crawl times, add more crawl databases and content processing components. If results are taking too long to be returned, replicate the partition. Or, in the case of a larger farm, split the index into more partitions. To increase query availability, create an extra query processing component on different application servers.
So why go to all the trouble of learning how all of this works? Because search is the backbone of collaboration and, as I mentioned up top, a key ingredient in making your environment perform under the scrutiny of discriminating end users. How do we get adoption to where we want it? Configure a fast, accurate, efficient search so your users can find what they’re looking for.