This creates a basic search component stub with a simple function that does nothing except return an error. Full functionality will come in later commits.
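A minimal sketch of what such a stub might look like (the type and function names here are illustrative, not the component's actual API):

```rust
// Hypothetical stub; the real component's types and names may differ.
#[derive(Debug)]
pub struct SearchError(pub String);

pub struct SearchEngineSelector;

impl SearchEngineSelector {
    pub fn filter_engine_configuration(&self) -> Result<(), SearchError> {
        // Placeholder until later commits add the real implementation.
        Err(SearchError("not yet implemented".to_string()))
    }
}
```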
This feature relies on an archived repo which hasn't had a commit in
over four years. It's also been noted that the `resource-usage`
artifacts aren't providing much value.
The suggestion was to simply remove these tasks instead.
- Split `upload_android_symbols.sh` into a generation script and an
upload script.
- `module-build` tasks that have `uploadSymbols` set run the generation script.
- Added the `upload-symbols` task to run the upload script. This
runs during the `ship` phase, the nightly cron job, and also for `[ci full]`
(the latter uses dummy credentials).
Firefox 125 was the last version to use the
mozilla-mobile/firefox-android repo. Now it is in mozilla-central, so
we can use the mozilla/gecko-dev mirror. Unfortunately, gecko-dev does
not have tagged releases, so we cannot look up manifests by their
version and have to use --ref instead.
Additionally, we were looking for Fenix manifests in the
mozilla-mobile/fenix repo only for versions < 110. However, v110 was also
released from the fenix repo; v111 was the first version to use the
combined mozilla-mobile/firefox-android repo. This has been fixed.
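Putting both fixes together, the lookup rules are roughly as follows (a sketch of the logic described above; the function and return shape are illustrative):

```rust
// Sketch of the manifest-lookup rules. Versions are major Firefox versions.
fn manifest_source(version: u32) -> (&'static str, &'static str) {
    if version < 111 {
        // v110 and earlier were released from the standalone Fenix repo.
        ("mozilla-mobile/fenix", "tag")
    } else if version <= 125 {
        // v111-v125 used the combined firefox-android repo.
        ("mozilla-mobile/firefox-android", "tag")
    } else {
        // gecko-dev has no release tags, so we must pass --ref instead.
        ("mozilla/gecko-dev", "ref")
    }
}
```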
The issue is that once a record is deleted, it no longer has a type
field. Reworked the ingestion client to deal with this. It now always
fetches all records, then filters them client side. Client-side
filtering is more flexible and allows us to handle the deleted records.
Updated the code to use caching to make the requests more efficient.
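Roughly, the new flow looks like this (a sketch assuming the remote-settings payload shape; the crate's real types differ in detail):

```rust
// Sketch: fetch all records, then filter client-side.
// Deleted records are tombstones with no `type` field.
struct RawRecord {
    id: String,
    deleted: bool,
    record_type: Option<String>,
}

fn delete_ingested_record(_id: &str) { /* remove previously ingested data */ }
fn process_record(_record: &RawRecord) { /* ingest the record */ }

fn ingest(all_records: Vec<RawRecord>, wanted_type: &str) {
    for record in all_records {
        if record.deleted {
            // Tombstone: clean up anything we ingested for this id.
            delete_ingested_record(&record.id);
        } else if record.record_type.as_deref() == Some(wanted_type) {
            process_record(&record);
        }
        // Records of other types are skipped by the client-side filter.
    }
}
```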
Rather than store last modified times for record types, store details on
each record that we've ingested. This means that if we fail to process
a record for any reason, on the next ingest we have another chance to
process it and recover.
Updated the `rs::Record` type to store the parsed `SuggestRecord`. This
simplifies a lot of the filtering/caching code.
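In sketch form (variant and field names are illustrative):

```rust
// Sketch: `rs::Record` now carries the parsed payload, so filtering
// and caching code can match on `payload` instead of re-parsing JSON.
enum SuggestRecord {
    Amp,
    Fakespot,
    Icon,
    GlobalConfig,
    // ...one variant per record type
}

struct Record {
    id: String,
    last_modified: u64,
    payload: SuggestRecord,
}
```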
Added support for ingesting these fields and using them to order the
fakespot suggestions. The fakespot scoring was getting complicated
enough that I split it out into its own module.
Also changed the Fakespot base score to 0.31. After further
discussion, it was decided that Fakespot suggestions should have higher
priority than AMP ones.
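As a rough sketch of the shape of the scoring in the new module (the boost inputs are assumptions, not the actual ingested fields):

```rust
// Sketch: fakespot scoring, now in its own module.
// 0.31 sorts fakespot suggestions just above AMP ones.
const FAKESPOT_BASE_SCORE: f64 = 0.31;

// `product_boost`/`review_boost` stand in for the newly ingested
// fields; the real names and weighting may differ.
fn fakespot_score(product_boost: f64, review_boost: f64) -> f64 {
    FAKESPOT_BASE_SCORE + product_boost + review_boost
}
```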
Defined metrics for suggest. I used labeled timing distributions, a
relatively new metric type that seems great for our needs. Using these
metrics, we can track a separate timing distribution for each record
type / provider.
Updated `ingest` to return metrics. Added `query_with_metrics`, which
returns metrics alongside the usual results. Added some classes to
record metrics as the operations execute.
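The recording classes have roughly this shape (a sketch; `IngestionMetrics` and its fields are illustrative stand-ins):

```rust
use std::time::Instant;

// Sketch: collect one timing sample per label (record type or
// provider) so the consumer can feed them into a labeled timing
// distribution.
#[derive(Default)]
pub struct IngestionMetrics {
    // (label, elapsed microseconds)
    pub samples: Vec<(String, u64)>,
}

impl IngestionMetrics {
    // Run `op`, timing it and recording the sample under `label`.
    pub fn measure<T>(&mut self, label: &str, op: impl FnOnce() -> T) -> T {
        let start = Instant::now();
        let result = op();
        self.samples
            .push((label.to_string(), start.elapsed().as_micros() as u64));
        result
    }
}
```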
Moved the top-level `fetch_suggestions` from db.rs to store.rs. It's
now responsible for metrics, so I thought it fit better in store.
Added a `get_cached_records` method that gets all records for a
collection, using a cache to make it efficient. I'm hoping that this will
help out Nish, who is experimenting with this on iOS, and also that we can
use something like this for remote settings.
This is prep-work for
https://bugzilla.mozilla.org/show_bug.cgi?id=1908802.
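A sketch of the caching approach (`Record` and the fetch functions are stand-ins, not the real remote-settings API):

```rust
use std::collections::HashMap;

struct Record; // placeholder for the real record type

fn fetch_collection_timestamp(_collection: &str) -> u64 { 0 }
fn fetch_all_records(_collection: &str) -> Vec<Record> { Vec::new() }

#[derive(Default)]
struct RecordCache {
    by_collection: HashMap<String, (u64, Vec<Record>)>,
}

impl RecordCache {
    fn get_cached_records(&mut self, collection: &str) -> &[Record] {
        let server_modified = fetch_collection_timestamp(collection);
        let entry = self
            .by_collection
            .entry(collection.to_string())
            .or_insert_with(|| (0, Vec::new()));
        // Only refetch the full record list when the collection changed.
        if entry.0 != server_modified || entry.1.is_empty() {
            *entry = (server_modified, fetch_all_records(collection));
        }
        &entry.1
    }
}
```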
Updated the Suggest CLI to log ingestion details so that I can test that
the new code works correctly. Added support for re-ingesting
suggestions and for ingesting specific providers.
Map each suggestion provider to a single record type, rather than multiple
ones, and always ingest icons/global config. I think this system is
simpler; having to list icons/global config for each provider
type is also a footgun. For example, we should ingest icons/config for
fakespot, but I forgot to list them.
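In sketch form, the mapping becomes a simple one-to-one match (only a few providers shown):

```rust
// Sketch: one record type per provider; icons and global config are
// always ingested, so providers no longer list them.
enum SuggestionProvider { Amp, Wikipedia, Fakespot }
enum SuggestRecordType { Amp, Wikipedia, Fakespot }

fn record_type_for(provider: SuggestionProvider) -> SuggestRecordType {
    match provider {
        SuggestionProvider::Amp => SuggestRecordType::Amp,
        SuggestionProvider::Wikipedia => SuggestRecordType::Wikipedia,
        SuggestionProvider::Fakespot => SuggestRecordType::Fakespot,
    }
}
```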
Removed the `SuggestIngestionConstraints::max_suggestions` field. AFAICT,
it was not being used by any consumers and doesn't seem to be
implemented correctly. We apply the limit to each record type, but a
single ingestion request will have many record types. It's also not
clear to me how this should work; for example, should config/icons
count toward the limit? I want to update this code, but I don't want to
worry about this field.
Updated the benchmark code to download all records/attachments directly,
rather than using the complex system of running an ingestion to figure out
what to download. I'm planning on updating the client code, and this
approach will work better with the new system.
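Roughly (a sketch; `RemoteSettingsClient` and its methods are stand-ins for the real client):

```rust
// Sketch: download every record and its attachment up front so the
// benchmark runs entirely against local data.
struct Record {
    attachment_location: Option<String>,
}

struct RemoteSettingsClient;

impl RemoteSettingsClient {
    fn get_records(&self, _collection: &str) -> Vec<Record> { Vec::new() }
    fn get_attachment(&self, _location: &str) -> Vec<u8> { Vec::new() }
}

fn download_benchmark_data(
    client: &RemoteSettingsClient,
    collection: &str,
) -> Vec<(Record, Option<Vec<u8>>)> {
    client
        .get_records(collection)
        .into_iter()
        .map(|record| {
            let attachment = record
                .attachment_location
                .as_deref()
                .map(|loc| client.get_attachment(loc));
            (record, attachment)
        })
        .collect()
}
```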