`localizationkit` is a toolkit for ensuring that your localized strings are the best that they can be.
Included are tests for various things such as:
* Checking that all strings have comments
* Checking that the comments don't just match the value
* Checking that tokens have position specifiers
* Checking that no invalid tokens are included
with lots more to come.
## Getting started
### Configuration
To use the library, first create a configuration file in TOML format. Here's an example:
```toml
default_language = "en"
[has_comments]
minimum_comment_length = 25
minimum_comment_words = 8
[token_matching]
allow_missing_defaults = true
[token_position_identifiers]
always = false
```
This configuration file sets `en` as the default language: this is the language that will be checked for comments, etc., and all tests will run relative to it. Each `[section_name]` header marks the settings that follow as belonging to the test with that identifier. For example, with the settings above, the `has_comments` test will check not only that comments exist, but that each one is at least 25 characters and 8 words long.
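With the file in place, the configuration can be loaded from Python. A minimal sketch, assuming the library exposes a `Configuration.from_file` helper (verify the name against your installed version):

```python
import localizationkit

# Load the TOML configuration; `from_file` is an assumed helper name here.
configuration = localizationkit.Configuration.from_file("localization_config.toml")
```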
Now we need to prepare the strings that the tests will run against. Here's how you can create an individual string:
```python
from localizationkit import LocalizedString
my_string = LocalizedString("My string's key", "My string's value", "My string's comment", "en")
```
This creates a single string with a key, value and comment, with its language code set to `en`. Once you've created some more (usually for different languages too), you can bundle them into a collection and run the tests against it.
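A minimal sketch follows; `LocalizedCollection`, `run_tests`, and the result accessors reflect the library's usual API, but treat the exact names as assumptions and check them against your installed version:

```python
import localizationkit
from localizationkit import LocalizedCollection

# Bundle the individual strings (across all languages) into one collection.
collection = LocalizedCollection(list_of_my_strings)

# Run every configured test against the collection.
results = localizationkit.run_tests(configuration, collection)

for result in results:
    if not result.succeeded():
        print("Test failed:", result.name)
```

The sections below describe the available tests and their settings.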
## Comment Similarity

Identifier: `comment_similarity`

Checks the similarity between a comment and the string's value in the default language. The comparison uses `difflib`'s `SequenceMatcher`. More details can be found [here](https://docs.python.org/3/library/difflib.html#difflib.SequenceMatcher.ratio).
| Setting | Type | Acceptable values | Default | Description |
|---|---|---|---|---|
| `maximum_similarity_ratio` | float | Between 0 and 1 | 0.5 | Sets the maximum allowable similarity ratio between the comment and the string value. The higher the ratio, the more similar they are; the longer the string, the more accurate the check. |
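To get a feel for the ratio, here's how `SequenceMatcher` scores two strings, using only the standard library:

```python
from difflib import SequenceMatcher

value = "Delete"

# A comment that merely repeats the value is maximally similar: ratio is 1.0.
print(SequenceMatcher(None, value, "Delete").ratio())  # 1.0

# A genuinely descriptive comment scores far lower, passing a 0.5 threshold.
comment = "Title of the button that removes a photo from the album"
print(SequenceMatcher(None, value, comment).ratio())
```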
## Has Comments
Identifier: `has_comments`
Checks that strings have comments.
_Note: The word count check splits comments on spaces, so only languages with Latin-style scripts are properly supported for it._
| Setting | Type | Acceptable values | Default | Description |
|---|---|---|---|---|
| `minimum_comment_length` | int | Any integer | 30 | Sets the minimum allowable length for a comment. Set a negative value to skip this check. |
| `minimum_comment_words` | int | Any integer | 10 | Sets the minimum allowable number of words in a comment. Set a negative value to skip this check. |
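For instance, with the default settings, a string like the following would pass: its comment is 12 words and well over 30 characters (the constructor is the one shown earlier; the key and strings are illustrative):

```python
from localizationkit import LocalizedString

# Comment is 10+ words and 30+ characters, satisfying the defaults above.
string = LocalizedString(
    "photo.delete.button",
    "Delete",
    "Title of the button that removes the selected photo from the album",
    "en",
)
```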
## Invalid Tokens

Identifier: `invalid_tokens`

Checks that strings do not contain Objective-C style alternative position tokens.

Objective-C appears to allow positional tokens of the form `%1@` rather than `%1$@`. While not illegal, it is preferred that tokens are consistent across all languages so that tools don't experience unexpected failures.
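A hypothetical sketch of how such tokens can be detected; this regex is illustrative, not the library's actual implementation:

```python
import re

# Matches positional tokens like %1@ where the position number is not
# followed by '$' before the format character.
ALTERNATIVE_POSITION = re.compile(r"%\d+(?!\$)[@a-zA-Z]")

print(ALTERNATIVE_POSITION.search("Hello %1@"))   # match -> flagged
print(ALTERNATIVE_POSITION.search("Hello %1$@"))  # None -> fine
```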
## Token Matching

Identifier: `token_matching`

Checks that the tokens in a string match across all languages. For example, if your English string is "Hello %s" but your French string is "Bonjour", this flags that a token is missing from the French string.
| Setting | Type | Acceptable values | Default | Description |
|---|---|---|---|---|
| `allow_missing_defaults` | boolean | `true` or `false` | `false` | Due to the way automated localization usually works, there is a default language, and other translations come in over time. If a translation is deleted, it isn't always deleted from all languages immediately. Setting this to `true` ignores any string in a non-default language that is missing from your default language. |
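A rough sketch of the idea, using a hypothetical helper rather than the library's implementation: extract the C-style tokens from each translation and compare the sets.

```python
import re

# Matches tokens such as %s, %d, %1$s, %2$@.
TOKEN = re.compile(r"%(?:\d+\$)?[@a-zA-Z]")

def tokens(string: str) -> set[str]:
    return set(TOKEN.findall(string))

english = "Hello %1$s, you have %2$d messages"
french = "Bonjour %1$s"

# Any difference means a token is missing or extra in a translation.
print(tokens(english) - tokens(french))  # {'%2$d'}
```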
## Token Position Identifiers
Identifier: `token_position_identifiers`
Checks that each token has a position specifier. For example, `%s` is not allowed, but `%1$s` is. Tokens can move around between languages, so position specifiers are extremely important.
| Setting | Type | Acceptable values | Default | Description |
|---|---|---|---|---|
| `always` | boolean | `true` or `false` | `false` | If a string has only a single token, it doesn't need a position specifier. Set this to `true` to require one even in that case. |
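Position specifiers matter because translations often reorder tokens. For example (the French phrasing here is illustrative):

```python
# Position specifiers let each language place tokens in its natural order.
english = "%1$s added %2$d photos"
french = "%2$d photos ajoutées par %1$s"  # tokens swapped, but still unambiguous
```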