This document outlines the design proposal for integrating GNQA into the Global Search feature.
When the GN2 Global Search page loads:

1. A request is initiated via HTMX to the GNQA search page with the search query.
2. Based on the results, a page or subsection is rendered, displaying the query and the answer, and providing links to references.
For more details on the UI design, refer to the pull request:
The API handles requests to the Fahamu API and manages result caching. Once a request to the Fahamu API is successful, the results are cached using SQLite for future queries. Additionally, a separate API is provided to query cached results.
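The cache-then-fetch flow described above can be sketched as follows. This is a minimal illustration, not the actual GN2 implementation: the table name, schema, and helper names are all hypothetical.

```python
import json
import sqlite3
import time

def init_cache(conn: sqlite3.Connection) -> None:
    """Create the cache table if it does not exist (hypothetical schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS gnqa_cache (
               query TEXT PRIMARY KEY,
               result TEXT NOT NULL,      -- JSON: answer plus references
               created_at REAL NOT NULL
           )"""
    )

def cached_query(conn: sqlite3.Connection, query: str, fetch_fn):
    """Return a cached result, or call fetch_fn (e.g. the Fahamu API) on a miss."""
    row = conn.execute(
        "SELECT result FROM gnqa_cache WHERE query = ?", (query,)
    ).fetchone()
    if row:
        return json.loads(row[0])
    result = fetch_fn(query)  # only reached on a cache miss
    conn.execute(
        "INSERT OR REPLACE INTO gnqa_cache VALUES (?, ?, ?)",
        (query, json.dumps(result), time.time()),
    )
    conn.commit()
    return result
```

With this shape, repeated identical queries hit SQLite instead of the Fahamu API, and the separate cache-query API can read the same table directly.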
For caching, we will use SQLite3, since it is already used for search history. Based on our study, this approach will require minimal space:
*Statistical Estimation:* We estimate that this caching solution would require approximately 79 MB annually for an estimated 20 users, each querying the system 5 times a day.
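The estimate above can be reproduced with a back-of-the-envelope calculation. Note that the ~2.22 KB average cached-result size used here is an assumed upper bound chosen to match the figure, not a measured value:

```python
# Back-of-the-envelope check of the annual cache size.
users = 20
queries_per_day = 5
days = 365
avg_result_kb = 2.22  # ASSUMED upper bound per cached result, not measured

annual_queries = users * queries_per_day * days    # 36,500 queries/year
annual_mb = annual_queries * avg_result_kb / 1024  # KB -> MB

print(f"{annual_queries} queries/year ~= {annual_mb:.0f} MB")  # ~79 MB
```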
*Why an average request size per user, and how was it determined?* The average request size is an upper-bound estimate of the size of the documents returned from the Fahamu API.
*Why assume 20 users making 5 requests per day?* We assume 20 users making 5 requests per day as a rough estimate of typical usage of GN2 services.
We can either pass the user's entire query to Fahamu unchanged, or parse the query to extract keywords.
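A minimal sketch of the keyword-parsing option, assuming a simple stopword filter. The stopword list and function name are illustrative only; a real implementation might use a proper NLP library instead:

```python
# Illustrative keyword extraction via stopword filtering.
# The stopword list is a toy example, not the GN2 implementation.
STOPWORDS = {"what", "is", "the", "a", "an", "of", "in", "on",
             "for", "to", "how", "does", "do"}

def extract_keywords(query: str) -> list[str]:
    """Lower-case the query, strip punctuation, and drop stopwords."""
    tokens = "".join(c if c.isalnum() else " " for c in query.lower()).split()
    return [t for t in tokens if t not in STOPWORDS]
```

The trade-off: passing the full query preserves context for the language model, while keyword parsing may improve retrieval when queries are long or conversational.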
Alternatively, we could generate potential questions based on the user's search and send those to Fahamu, which would then return possible related queries.
The latest Fahamu API docs describe a way to include subquestions by setting `amplify=True` in the POST request. We also have our own implementation for parsing text to extract questions.
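A sketch of building the POST payload with subquestions enabled. Only the `amplify` flag comes from the Fahamu API docs; the field name `query` and the payload shape here are placeholders, not the documented API schema:

```python
# Sketch of a Fahamu POST payload with subquestions enabled.
# Only `amplify` is documented; `query` is a placeholder field name.
def build_fahamu_payload(query: str, amplify: bool = True) -> dict:
    return {
        "query": query,      # placeholder field name, not confirmed by the docs
        "amplify": amplify,  # documented flag: include subquestions
    }

payload = build_fahamu_payload("which genes affect longevity in BXD mice?")
```

The payload would then be sent as the JSON body of the POST request to the Fahamu endpoint.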