Add new illegal_subcategory field in abuse reports (#22395)

This commit is contained in:
William Durand 2024-06-24 11:37:01 +02:00 committed by GitHub
Parent ebf179242b
Commit 730205e529
No key found matching this signature
GPG key ID: B5690EEEBB952194
13 changed files with 1290 additions and 4 deletions


@ -57,6 +57,7 @@ to if necessary.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:<json string|null illegal_subcategory: The specific violation - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_subcategory-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -88,6 +89,7 @@ to if necessary.
:>json string|null operating_system_version: The client's operating system version.
:>json string|null reason: The reason for the report.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
:>json string|null illegal_subcategory: The specific violation - only defined when the reason is set to ``illegal``.
.. _abuse-report_entry_point-parameter:
@ -254,6 +256,78 @@ to if necessary.
other Other
================================================ ================================================
.. _abuse-report-illegal_subcategory-parameter:
Accepted values for the ``illegal_subcategory`` parameter:
================================================ ============================================ =============================================================================================
Illegal category Value Description
================================================ ============================================ =============================================================================================
animal_welfare other Something else
consumer_information insufficient_information_on_traders Insufficient information on traders
consumer_information noncompliance_pricing Non-compliance with pricing regulations
consumer_information hidden_advertisement Hidden advertisement or commercial communication, including by influencers
consumer_information misleading_info_goods_services Misleading information about the characteristics of the goods and services
consumer_information misleading_info_consumer_rights Misleading information about the consumers rights
consumer_information other Something else
data_protection_and_privacy_violations biometric_data_breach Biometric data breach
data_protection_and_privacy_violations missing_processing_ground Missing processing ground for data
data_protection_and_privacy_violations right_to_be_forgotten Right to be forgotten
data_protection_and_privacy_violations data_falsification Data falsification
data_protection_and_privacy_violations other Something else
illegal_or_harmful_speech defamation Defamation
illegal_or_harmful_speech discrimination Discrimination
illegal_or_harmful_speech hate_speech Illegal incitement to violence and hatred based on protected characteristics (hate speech)
illegal_or_harmful_speech other Something else
intellectual_property_infringements design_infringement Design infringements
intellectual_property_infringements geographic_indications_infringement Geographical indications infringements
intellectual_property_infringements patent_infringement Patent infringements
intellectual_property_infringements trade_secret_infringement Trade secret infringements
intellectual_property_infringements other Something else
negative_effects_on_civic_discourse_or_elections violation_eu_law Violation of EU law relevant to civic discourse or elections
negative_effects_on_civic_discourse_or_elections violation_national_law Violation of national law relevant to civic discourse or elections
negative_effects_on_civic_discourse_or_elections misinformation_disinformation_disinformation Misinformation, disinformation, foreign information manipulation and interference
negative_effects_on_civic_discourse_or_elections other Something else
non_consensual_behaviour non_consensual_image_sharing Non-consensual image sharing
non_consensual_behaviour non_consensual_items_deepfake Non-consensual items containing deepfake or similar technology using a third party's features
non_consensual_behaviour online_bullying_intimidation Online bullying/intimidation
non_consensual_behaviour stalking Stalking
non_consensual_behaviour other Something else
pornography_or_sexualized_content adult_sexual_material Adult sexual material
pornography_or_sexualized_content image_based_sexual_abuse Image-based sexual abuse (excluding content depicting minors)
pornography_or_sexualized_content other Something else
protection_of_minors age_specific_restrictions_minors Age-specific restrictions concerning minors
protection_of_minors child_sexual_abuse_material Child sexual abuse material
protection_of_minors grooming_sexual_enticement_minors Grooming/sexual enticement of minors
protection_of_minors other Something else
risk_for_public_security illegal_organizations Illegal organizations
risk_for_public_security risk_environmental_damage Risk for environmental damage
risk_for_public_security risk_public_health Risk for public health
risk_for_public_security terrorist_content Terrorist content
risk_for_public_security other Something else
scams_and_fraud inauthentic_accounts Inauthentic accounts
scams_and_fraud inauthentic_listings Inauthentic listings
scams_and_fraud inauthentic_user_reviews Inauthentic user reviews
scams_and_fraud impersonation_account_hijacking Impersonation or account hijacking
scams_and_fraud phishing Phishing
scams_and_fraud pyramid_schemes Pyramid schemes
scams_and_fraud other Something else
self_harm content_promoting_eating_disorders Content promoting eating disorders
self_harm self_mutilation Self-mutilation
self_harm suicide Suicide
self_harm other Something else
unsafe_and_prohibited_products prohibited_products Prohibited or restricted products
unsafe_and_prohibited_products unsafe_products Unsafe or non-compliant products
unsafe_and_prohibited_products other Something else
violence coordinated_harm Coordinated harm
violence gender_based_violence Gender-based violence
violence human_exploitation Human exploitation
violence human_trafficking Human trafficking
violence incitement_violence_hatred General calls or incitement to violence and/or hatred
violence other Something else
other other Something else
================================================ ============================================ =============================================================================================
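The pairing rule documented above means a client reporting illegal content must send both fields, and the subcategory must belong to the chosen category. A minimal client-side sketch of assembling such a payload (the pairing table is abridged to two categories for illustration; field names match the API docs, the helper itself is hypothetical):

```python
import json

# Abridged excerpt of the category -> subcategory table above (illustrative).
VALID_SUBCATEGORIES = {
    "animal_welfare": {"other"},
    "scams_and_fraud": {
        "inauthentic_accounts", "phishing", "pyramid_schemes", "other",
    },
}


def build_report(message, reason, illegal_category=None, illegal_subcategory=None):
    """Assemble an abuse report body, enforcing the documented pairing rule."""
    if reason == "illegal":
        allowed = VALID_SUBCATEGORIES.get(illegal_category, set())
        if illegal_subcategory not in allowed:
            raise ValueError(
                "illegal_subcategory must match the supplied illegal_category"
            )
    return json.dumps({
        "message": message,
        "reason": reason,
        "illegal_category": illegal_category,
        "illegal_subcategory": illegal_subcategory,
    })


payload = build_report(
    "Sells counterfeit goods", "illegal",
    illegal_category="scams_and_fraud",
    illegal_subcategory="phishing",
)
```

The server performs the same check; validating locally simply surfaces the mismatch before the request is sent.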
------------------------------
Submitting a user abuse report
@ -276,6 +350,7 @@ so reports can be responded to if necessary.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:<json string|null illegal_subcategory: The specific violation - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_subcategory-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -291,6 +366,7 @@ so reports can be responded to if necessary.
:>json string message: The body/content of the abuse report.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
:>json string|null illegal_subcategory: The specific violation - only defined when the reason is set to ``illegal``.
.. _abuse-user-reason-parameter:
@ -327,6 +403,7 @@ so reports can be responded to if necessary.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:<json string|null illegal_subcategory: The specific violation - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_subcategory-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -340,6 +417,7 @@ so reports can be responded to if necessary.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null reason: The reason for the report.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
:>json string|null illegal_subcategory: The specific violation - only defined when the reason is set to ``illegal``.
.. _abuse-rating-reason-parameter:
@ -376,6 +454,7 @@ so reports can be responded to if necessary.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:<json string|null illegal_subcategory: The specific violation - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_subcategory-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -388,6 +467,7 @@ so reports can be responded to if necessary.
:>json string message: The body/content of the abuse report.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
:>json string|null illegal_subcategory: The specific violation - only defined when the reason is set to ``illegal``.
.. _abuse-collection-reason-parameter:


@ -470,6 +470,7 @@ These are `v5` specific changes - `v4` changes apply also.
* 2023-11-09: removed reviewers /enable and /disable endpoints. https://github.com/mozilla/addons-server/issues/21356
* 2023-12-07: added ``lang`` parameter to all /abuse/report/ endpoints. https://github.com/mozilla/addons-server/issues/21529
* 2024-06-20: added ``illegal_category`` parameter to all /abuse/report/ endpoints. https://github.com/mozilla/addons/issues/14870
* 2024-06-20: added ``illegal_subcategory`` parameter to all /abuse/report/ endpoints. https://github.com/mozilla/addons/issues/14875
.. _`#11380`: https://github.com/mozilla/addons-server/issues/11380/
.. _`#11379`: https://github.com/mozilla/addons-server/issues/11379/


@ -152,6 +152,7 @@ class AbuseReportAdmin(AMOModelAdmin):
'addon_card',
'location',
'illegal_category',
'illegal_subcategory',
)
fieldsets = (
('Abuse Report Core Information', {'fields': ('reason', 'message')}),
@ -180,6 +181,7 @@ class AbuseReportAdmin(AMOModelAdmin):
'report_entry_point',
'location',
'illegal_category',
'illegal_subcategory',
)
},
),


@ -538,6 +538,11 @@ class CinderReport(CinderEntity):
if considers_illegal
else None
),
'illegal_subcategory': (
self.abuse_report.illegal_subcategory_cinder_value
if considers_illegal
else None
),
}
def report(self, *args, **kwargs):


@ -0,0 +1,99 @@
# Generated by Django 4.2.13 on 2024-06-20 12:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('abuse', '0033_abusereport_illegal_category'),
]
operations = [
migrations.AddField(
model_name='abusereport',
name='illegal_subcategory',
field=models.PositiveSmallIntegerField(
blank=True,
choices=[
(None, 'None'),
(1, 'Something else'),
(2, 'Insufficient information on traders'),
(3, 'Non-compliance with pricing regulations'),
(
4,
'Hidden advertisement or commercial communication, including by influencers',
),
(
5,
'Misleading information about the characteristics of the goods and services',
),
(6, 'Misleading information about the consumers rights'),
(7, 'Biometric data breach'),
(8, 'Missing processing ground for data'),
(9, 'Right to be forgotten'),
(10, 'Data falsification'),
(11, 'Defamation'),
(12, 'Discrimination'),
(
13,
'Illegal incitement to violence and hatred based on protected characteristics (hate speech)',
),
(14, 'Design infringements'),
(15, 'Geographical indications infringements'),
(16, 'Patent infringements'),
(17, 'Trade secret infringements'),
(
18,
'Violation of EU law relevant to civic discourse or elections',
),
(
19,
'Violation of national law relevant to civic discourse or elections',
),
(
20,
'Misinformation, disinformation, foreign information manipulation and interference',
),
(21, 'Non-consensual image sharing'),
(
22,
"Non-consensual items containing deepfake or similar technology using a third party's features",
),
(23, 'Online bullying/intimidation'),
(24, 'Stalking'),
(25, 'Adult sexual material'),
(
26,
'Image-based sexual abuse (excluding content depicting minors)',
),
(27, 'Age-specific restrictions concerning minors'),
(28, 'Child sexual abuse material'),
(29, 'Grooming/sexual enticement of minors'),
(30, 'Illegal organizations'),
(31, 'Risk for environmental damage'),
(32, 'Risk for public health'),
(33, 'Terrorist content'),
(34, 'Inauthentic accounts'),
(35, 'Inauthentic listings'),
(36, 'Inauthentic user reviews'),
(37, 'Impersonation or account hijacking'),
(38, 'Phishing'),
(39, 'Pyramid schemes'),
(40, 'Content promoting eating disorders'),
(41, 'Self-mutilation'),
(42, 'Suicide'),
(43, 'Prohibited or restricted products'),
(44, 'Unsafe or non-compliant products'),
(45, 'Coordinated harm'),
(46, 'Gender-based violence'),
(47, 'Human exploitation'),
(48, 'Human trafficking'),
(49, 'General calls or incitement to violence and/or hatred'),
],
default=None,
help_text='Specific violation of illegal content',
null=True,
),
),
]


@ -18,6 +18,7 @@ from olympia.constants.abuse import (
APPEAL_EXPIRATION_DAYS,
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
ILLEGAL_SUBCATEGORIES,
)
from olympia.ratings.models import Rating
from olympia.users.models import UserProfile
@ -638,6 +639,13 @@ class AbuseReport(ModelBase):
null=True,
help_text='Type of illegal content',
)
illegal_subcategory = models.PositiveSmallIntegerField(
default=None,
choices=ILLEGAL_SUBCATEGORIES.choices,
blank=True,
null=True,
help_text='Specific violation of illegal content',
)
objects = AbuseReportManager()
@ -742,6 +750,14 @@ class AbuseReport(ModelBase):
const = ILLEGAL_CATEGORIES.for_value(self.illegal_category).constant
return f'STATEMENT_CATEGORY_{const}'
@property
def illegal_subcategory_cinder_value(self):
if not self.illegal_subcategory:
return None
# We should send "normalized" constants to Cinder.
const = ILLEGAL_SUBCATEGORIES.for_value(self.illegal_subcategory).constant
return f'KEYWORD_{const}'
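The property above sends Cinder a "normalized" constant rather than the raw integer stored on the model: the constant name of the chosen subcategory is prefixed with ``KEYWORD_``. A standalone sketch of that normalization (the mapping here is a hypothetical stand-in for ``ILLEGAL_SUBCATEGORIES``):

```python
# Hypothetical excerpt of the id -> constant-name mapping held by
# ILLEGAL_SUBCATEGORIES in olympia.constants.abuse.
SUBCATEGORY_CONSTANTS = {1: "OTHER", 38: "PHISHING", 42: "SUICIDE"}


def cinder_value(subcategory_id):
    """Return the normalized Cinder keyword, or None when the field is unset."""
    if not subcategory_id:
        return None
    return f"KEYWORD_{SUBCATEGORY_CONSTANTS[subcategory_id]}"
```

Note that a falsy value (``None`` or a report created without the field) yields ``None``, mirroring the early return in ``illegal_subcategory_cinder_value``.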
class CantBeAppealed(Exception):
pass


@ -10,7 +10,11 @@ from olympia.accounts.serializers import BaseUserSerializer
from olympia.api.exceptions import UnavailableForLegalReasons
from olympia.api.fields import ReverseChoiceField
from olympia.api.serializers import AMOModelSerializer
from olympia.constants.abuse import ILLEGAL_CATEGORIES
from olympia.constants.abuse import (
ILLEGAL_CATEGORIES,
ILLEGAL_SUBCATEGORIES,
ILLEGAL_SUBCATEGORIES_BY_CATEGORY,
)
from .models import AbuseReport
from .tasks import report_to_cinder
@ -57,6 +61,11 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
required=False,
allow_null=True,
)
illegal_subcategory = ReverseChoiceField(
choices=list(ILLEGAL_SUBCATEGORIES.api_choices),
required=False,
allow_null=True,
)
class Meta:
model = AbuseReport
@ -68,6 +77,7 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
'reporter_name',
'reporter_email',
'illegal_category',
'illegal_subcategory',
)
def validate(self, data):
@ -93,6 +103,38 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
elif data.get('illegal_category') is None:
msg = serializers.Field.default_error_messages['null']
raise serializers.ValidationError({'illegal_category': [msg]})
elif data.get('illegal_category') is not None:
msg = (
'This value must be omitted or set to "null" when the `reason` is not '
'"illegal".'
)
raise serializers.ValidationError({'illegal_category': [msg]})
# When the reason is "illegal", the `illegal_subcategory` field is also
# required. In addition, the subcategory depends on the category set.
if data.get('reason') == AbuseReport.REASONS.ILLEGAL:
subcategory = data.get('illegal_subcategory')
valid_subcategories = ILLEGAL_SUBCATEGORIES_BY_CATEGORY.get(
data.get('illegal_category'), []
)
if 'illegal_subcategory' not in data:
msg = serializers.Field.default_error_messages['required']
raise serializers.ValidationError({'illegal_subcategory': [msg]})
elif subcategory is None:
msg = serializers.Field.default_error_messages['null']
raise serializers.ValidationError({'illegal_subcategory': [msg]})
elif subcategory not in valid_subcategories:
msg = (
'This value cannot be used in combination with the '
'supplied `illegal_category`.'
)
raise serializers.ValidationError({'illegal_subcategory': [msg]})
elif data.get('illegal_subcategory') is not None:
msg = (
'This value must be omitted or set to "null" when the `reason` is not '
'"illegal".'
)
raise serializers.ValidationError({'illegal_subcategory': [msg]})
return data
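The serializer's ``validate`` method above distinguishes four failure modes for ``illegal_subcategory``: missing, explicitly null, mismatched with the category, and supplied when the reason is not ``illegal``. A self-contained sketch of that decision table (error labels are paraphrased, not DRF's exact messages):

```python
def validate_subcategory(data, valid_subcategories):
    """Return an error label for the illegal_subcategory rules, or None if valid."""
    if data.get("reason") == "illegal":
        if "illegal_subcategory" not in data:
            return "required"
        if data["illegal_subcategory"] is None:
            return "null"
        if data["illegal_subcategory"] not in valid_subcategories:
            return "invalid combination with illegal_category"
    elif data.get("illegal_subcategory") is not None:
        return "must be omitted or null when reason is not illegal"
    return None
```

Separating "key absent" from "key present but null" matches the serializer, which raises the ``required`` and ``null`` default errors respectively.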


@ -22,7 +22,11 @@ from olympia.amo.tests import (
)
from olympia.amo.tests.test_helpers import get_image_path
from olympia.bandwagon.models import Collection, CollectionAddon
from olympia.constants.abuse import DECISION_ACTIONS, ILLEGAL_CATEGORIES
from olympia.constants.abuse import (
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
ILLEGAL_SUBCATEGORIES,
)
from olympia.constants.promoted import NOT_PROMOTED, NOTABLE, RECOMMENDED
from olympia.ratings.models import Rating
from olympia.reviewers.models import NeedsHumanReview
@ -242,6 +246,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
}
@ -276,6 +281,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -322,6 +328,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -404,6 +411,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
}
@ -470,6 +478,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
}
@ -523,6 +532,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -571,6 +581,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -642,6 +653,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -753,6 +765,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -846,6 +859,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -910,6 +924,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1351,6 +1366,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
}
@ -1385,6 +1401,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1431,6 +1448,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1499,6 +1517,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1574,6 +1593,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1626,6 +1646,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -1722,6 +1743,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
}
@ -1803,6 +1825,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2024,6 +2047,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2089,6 +2113,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2176,6 +2201,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2264,6 +2290,7 @@ class TestCinderCollection(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2334,6 +2361,7 @@ class TestCinderCollection(BaseTestCinderCase, TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
},
'entity_type': 'amo_report',
},
@ -2393,6 +2421,7 @@ class TestCinderReport(TestCase):
'reason': "DSA: It violates Mozilla's Add-on Policies",
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
}
def test_locale_in_attributes(self):
@ -2407,6 +2436,7 @@ class TestCinderReport(TestCase):
'reason': None,
'considers_illegal': False,
'illegal_category': None,
'illegal_subcategory': None,
}
def test_considers_illegal(self):
@ -2414,6 +2444,7 @@ class TestCinderReport(TestCase):
guid=addon_factory().guid,
reason=AbuseReport.REASONS.ILLEGAL,
illegal_category=ILLEGAL_CATEGORIES.ANIMAL_WELFARE,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
assert self.cinder_class(abuse_report).get_attributes() == {
'id': str(abuse_report.pk),
@ -2425,4 +2456,5 @@ class TestCinderReport(TestCase):
),
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_ANIMAL_WELFARE',
'illegal_subcategory': 'KEYWORD_OTHER',
}


@ -25,6 +25,7 @@ from olympia.constants.abuse import (
APPEAL_EXPIRATION_DAYS,
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
ILLEGAL_SUBCATEGORIES,
)
from olympia.ratings.models import Rating
from olympia.reviewers.models import NeedsHumanReview
@ -301,6 +302,131 @@ class TestAbuse(TestCase):
(15, 'other'),
)
assert ILLEGAL_SUBCATEGORIES.choices == (
(None, 'None'),
(1, 'Something else'),
(2, 'Insufficient information on traders'),
(3, 'Non-compliance with pricing regulations'),
(
4,
'Hidden advertisement or commercial communication, including '
'by influencers',
),
(
5,
'Misleading information about the characteristics of the goods '
'and services',
),
(6, 'Misleading information about the consumers rights'),
(7, 'Biometric data breach'),
(8, 'Missing processing ground for data'),
(9, 'Right to be forgotten'),
(10, 'Data falsification'),
(11, 'Defamation'),
(12, 'Discrimination'),
(
13,
'Illegal incitement to violence and hatred based on protected '
'characteristics (hate speech)',
),
(14, 'Design infringements'),
(15, 'Geographical indications infringements'),
(16, 'Patent infringements'),
(17, 'Trade secret infringements'),
(18, 'Violation of EU law relevant to civic discourse or elections'),
(19, 'Violation of national law relevant to civic discourse or elections'),
(
20,
'Misinformation, disinformation, foreign information manipulation '
'and interference',
),
(21, 'Non-consensual image sharing'),
(
22,
'Non-consensual items containing deepfake or similar technology '
"using a third party's features",
),
(23, 'Online bullying/intimidation'),
(24, 'Stalking'),
(25, 'Adult sexual material'),
(26, 'Image-based sexual abuse (excluding content depicting minors)'),
(27, 'Age-specific restrictions concerning minors'),
(28, 'Child sexual abuse material'),
(29, 'Grooming/sexual enticement of minors'),
(30, 'Illegal organizations'),
(31, 'Risk for environmental damage'),
(32, 'Risk for public health'),
(33, 'Terrorist content'),
(34, 'Inauthentic accounts'),
(35, 'Inauthentic listings'),
(36, 'Inauthentic user reviews'),
(37, 'Impersonation or account hijacking'),
(38, 'Phishing'),
(39, 'Pyramid schemes'),
(40, 'Content promoting eating disorders'),
(41, 'Self-mutilation'),
(42, 'Suicide'),
(43, 'Prohibited or restricted products'),
(44, 'Unsafe or non-compliant products'),
(45, 'Coordinated harm'),
(46, 'Gender-based violence'),
(47, 'Human exploitation'),
(48, 'Human trafficking'),
(49, 'General calls or incitement to violence and/or hatred'),
)
assert ILLEGAL_SUBCATEGORIES.api_choices == (
(None, None),
(1, 'other'),
(2, 'insufficient_information_on_traders'),
(3, 'noncompliance_pricing'),
(4, 'hidden_advertisement'),
(5, 'misleading_info_goods_services'),
(6, 'misleading_info_consumer_rights'),
(7, 'biometric_data_breach'),
(8, 'missing_processing_ground'),
(9, 'right_to_be_forgotten'),
(10, 'data_falsification'),
(11, 'defamation'),
(12, 'discrimination'),
(13, 'hate_speech'),
(14, 'design_infringement'),
(15, 'geographic_indications_infringement'),
(16, 'patent_infringement'),
(17, 'trade_secret_infringement'),
(18, 'violation_eu_law'),
(19, 'violation_national_law'),
(20, 'misinformation_disinformation_disinformation'),
(21, 'non_consensual_image_sharing'),
(22, 'non_consensual_items_deepfake'),
(23, 'online_bullying_intimidation'),
(24, 'stalking'),
(25, 'adult_sexual_material'),
(26, 'image_based_sexual_abuse'),
(27, 'age_specific_restrictions_minors'),
(28, 'child_sexual_abuse_material'),
(29, 'grooming_sexual_enticement_minors'),
(30, 'illegal_organizations'),
(31, 'risk_environmental_damage'),
(32, 'risk_public_health'),
(33, 'terrorist_content'),
(34, 'inauthentic_accounts'),
(35, 'inauthentic_listings'),
(36, 'inauthentic_user_reviews'),
(37, 'impersonation_account_hijacking'),
(38, 'phishing'),
(39, 'pyramid_schemes'),
(40, 'content_promoting_eating_disorders'),
(41, 'self_mutilation'),
(42, 'suicide'),
(43, 'prohibited_products'),
(44, 'unsafe_products'),
(45, 'coordinated_harm'),
(46, 'gender_based_violence'),
(47, 'human_exploitation'),
(48, 'human_trafficking'),
(49, 'incitement_violence_hatred'),
)
def test_type(self):
addon = addon_factory(guid='@lol')
report = AbuseReport.objects.create(guid=addon.guid)
@ -399,6 +525,10 @@ class TestAbuse(TestCase):
report = AbuseReport()
assert not report.illegal_category_cinder_value
def test_illegal_subcategory_cinder_value_no_illegal_subcategory(self):
report = AbuseReport()
assert not report.illegal_subcategory_cinder_value
class TestAbuseManager(TestCase):
def test_for_addon_finds_by_author(self):
@ -2446,3 +2576,186 @@ def test_illegal_category_cinder_value(illegal_category, expected):
illegal_category=illegal_category,
)
assert abuse_report.illegal_category_cinder_value == expected
@pytest.mark.django_db
@pytest.mark.parametrize(
'illegal_subcategory,expected',
[
(None, None),
(ILLEGAL_SUBCATEGORIES.OTHER, 'KEYWORD_OTHER'),
(
ILLEGAL_SUBCATEGORIES.INSUFFICIENT_INFORMATION_ON_TRADERS,
'KEYWORD_INSUFFICIENT_INFORMATION_ON_TRADERS',
),
(
ILLEGAL_SUBCATEGORIES.NONCOMPLIANCE_PRICING,
'KEYWORD_NONCOMPLIANCE_PRICING',
),
(
ILLEGAL_SUBCATEGORIES.HIDDEN_ADVERTISEMENT,
'KEYWORD_HIDDEN_ADVERTISEMENT',
),
(
ILLEGAL_SUBCATEGORIES.MISLEADING_INFO_GOODS_SERVICES,
'KEYWORD_MISLEADING_INFO_GOODS_SERVICES',
),
(
ILLEGAL_SUBCATEGORIES.MISLEADING_INFO_CONSUMER_RIGHTS,
'KEYWORD_MISLEADING_INFO_CONSUMER_RIGHTS',
),
(
ILLEGAL_SUBCATEGORIES.BIOMETRIC_DATA_BREACH,
'KEYWORD_BIOMETRIC_DATA_BREACH',
),
(
ILLEGAL_SUBCATEGORIES.MISSING_PROCESSING_GROUND,
'KEYWORD_MISSING_PROCESSING_GROUND',
),
(
ILLEGAL_SUBCATEGORIES.RIGHT_TO_BE_FORGOTTEN,
'KEYWORD_RIGHT_TO_BE_FORGOTTEN',
),
(
ILLEGAL_SUBCATEGORIES.DATA_FALSIFICATION,
'KEYWORD_DATA_FALSIFICATION',
),
(ILLEGAL_SUBCATEGORIES.DEFAMATION, 'KEYWORD_DEFAMATION'),
(ILLEGAL_SUBCATEGORIES.DISCRIMINATION, 'KEYWORD_DISCRIMINATION'),
(ILLEGAL_SUBCATEGORIES.HATE_SPEECH, 'KEYWORD_HATE_SPEECH'),
(
ILLEGAL_SUBCATEGORIES.DESIGN_INFRINGEMENT,
'KEYWORD_DESIGN_INFRINGEMENT',
),
(
ILLEGAL_SUBCATEGORIES.GEOGRAPHIC_INDICATIONS_INFRINGEMENT,
'KEYWORD_GEOGRAPHIC_INDICATIONS_INFRINGEMENT',
),
(
ILLEGAL_SUBCATEGORIES.PATENT_INFRINGEMENT,
'KEYWORD_PATENT_INFRINGEMENT',
),
(
ILLEGAL_SUBCATEGORIES.TRADE_SECRET_INFRINGEMENT,
'KEYWORD_TRADE_SECRET_INFRINGEMENT',
),
(
ILLEGAL_SUBCATEGORIES.VIOLATION_EU_LAW,
'KEYWORD_VIOLATION_EU_LAW',
),
(
ILLEGAL_SUBCATEGORIES.VIOLATION_NATIONAL_LAW,
'KEYWORD_VIOLATION_NATIONAL_LAW',
),
(
ILLEGAL_SUBCATEGORIES.MISINFORMATION_DISINFORMATION_DISINFORMATION,
'KEYWORD_MISINFORMATION_DISINFORMATION_DISINFORMATION',
),
(
ILLEGAL_SUBCATEGORIES.NON_CONSENSUAL_IMAGE_SHARING,
'KEYWORD_NON_CONSENSUAL_IMAGE_SHARING',
),
(
ILLEGAL_SUBCATEGORIES.NON_CONSENSUAL_ITEMS_DEEPFAKE,
'KEYWORD_NON_CONSENSUAL_ITEMS_DEEPFAKE',
),
(
ILLEGAL_SUBCATEGORIES.ONLINE_BULLYING_INTIMIDATION,
'KEYWORD_ONLINE_BULLYING_INTIMIDATION',
),
(ILLEGAL_SUBCATEGORIES.STALKING, 'KEYWORD_STALKING'),
(
ILLEGAL_SUBCATEGORIES.ADULT_SEXUAL_MATERIAL,
'KEYWORD_ADULT_SEXUAL_MATERIAL',
),
(
ILLEGAL_SUBCATEGORIES.IMAGE_BASED_SEXUAL_ABUSE,
'KEYWORD_IMAGE_BASED_SEXUAL_ABUSE',
),
(
ILLEGAL_SUBCATEGORIES.AGE_SPECIFIC_RESTRICTIONS_MINORS,
'KEYWORD_AGE_SPECIFIC_RESTRICTIONS_MINORS',
),
(
ILLEGAL_SUBCATEGORIES.CHILD_SEXUAL_ABUSE_MATERIAL,
'KEYWORD_CHILD_SEXUAL_ABUSE_MATERIAL',
),
(
ILLEGAL_SUBCATEGORIES.GROOMING_SEXUAL_ENTICEMENT_MINORS,
'KEYWORD_GROOMING_SEXUAL_ENTICEMENT_MINORS',
),
(
ILLEGAL_SUBCATEGORIES.ILLEGAL_ORGANIZATIONS,
'KEYWORD_ILLEGAL_ORGANIZATIONS',
),
(
ILLEGAL_SUBCATEGORIES.RISK_ENVIRONMENTAL_DAMAGE,
'KEYWORD_RISK_ENVIRONMENTAL_DAMAGE',
),
(
ILLEGAL_SUBCATEGORIES.RISK_PUBLIC_HEALTH,
'KEYWORD_RISK_PUBLIC_HEALTH',
),
(
ILLEGAL_SUBCATEGORIES.TERRORIST_CONTENT,
'KEYWORD_TERRORIST_CONTENT',
),
(
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_ACCOUNTS,
'KEYWORD_INAUTHENTIC_ACCOUNTS',
),
(
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_LISTINGS,
'KEYWORD_INAUTHENTIC_LISTINGS',
),
(
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_USER_REVIEWS,
'KEYWORD_INAUTHENTIC_USER_REVIEWS',
),
(
ILLEGAL_SUBCATEGORIES.IMPERSONATION_ACCOUNT_HIJACKING,
'KEYWORD_IMPERSONATION_ACCOUNT_HIJACKING',
),
(ILLEGAL_SUBCATEGORIES.PHISHING, 'KEYWORD_PHISHING'),
(ILLEGAL_SUBCATEGORIES.PYRAMID_SCHEMES, 'KEYWORD_PYRAMID_SCHEMES'),
(
ILLEGAL_SUBCATEGORIES.CONTENT_PROMOTING_EATING_DISORDERS,
'KEYWORD_CONTENT_PROMOTING_EATING_DISORDERS',
),
(ILLEGAL_SUBCATEGORIES.SELF_MUTILATION, 'KEYWORD_SELF_MUTILATION'),
(ILLEGAL_SUBCATEGORIES.SUICIDE, 'KEYWORD_SUICIDE'),
(
ILLEGAL_SUBCATEGORIES.PROHIBITED_PRODUCTS,
'KEYWORD_PROHIBITED_PRODUCTS',
),
(ILLEGAL_SUBCATEGORIES.UNSAFE_PRODUCTS, 'KEYWORD_UNSAFE_PRODUCTS'),
(
ILLEGAL_SUBCATEGORIES.COORDINATED_HARM,
'KEYWORD_COORDINATED_HARM',
),
(
ILLEGAL_SUBCATEGORIES.GENDER_BASED_VIOLENCE,
'KEYWORD_GENDER_BASED_VIOLENCE',
),
(
ILLEGAL_SUBCATEGORIES.HUMAN_EXPLOITATION,
'KEYWORD_HUMAN_EXPLOITATION',
),
(
ILLEGAL_SUBCATEGORIES.HUMAN_TRAFFICKING,
'KEYWORD_HUMAN_TRAFFICKING',
),
(
ILLEGAL_SUBCATEGORIES.INCITEMENT_VIOLENCE_HATRED,
'KEYWORD_INCITEMENT_VIOLENCE_HATRED',
),
],
)
def test_illegal_subcategory_cinder_value(illegal_subcategory, expected):
addon = addon_factory()
abuse_report = AbuseReport.objects.create(
guid=addon.guid,
reason=AbuseReport.REASONS.ILLEGAL,
illegal_subcategory=illegal_subcategory,
)
assert abuse_report.illegal_subcategory_cinder_value == expected
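
The parametrized cases above all follow one pattern: the Cinder value appears to be the subcategory's constant name prefixed with `KEYWORD_`, and `None` maps to `None`. A minimal standalone sketch of that convention (the names and lookup table below are illustrative; the real property lives on `AbuseReport`):

```python
# Illustrative only: a few subcategory ids mapped to their constant names,
# mirroring entries from the parametrized test above.
ILLEGAL_SUBCATEGORY_NAMES = {
    1: 'OTHER',
    7: 'BIOMETRIC_DATA_BREACH',
    38: 'PHISHING',
}


def illegal_subcategory_cinder_value(illegal_subcategory):
    # No subcategory selected -> no Cinder keyword.
    if illegal_subcategory is None:
        return None
    # Cinder keyword = 'KEYWORD_' + the constant name of the subcategory.
    return f'KEYWORD_{ILLEGAL_SUBCATEGORY_NAMES[illegal_subcategory]}'
```

This matches every `(value, expected)` pair in the table, including the `(None, None)` case.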


@@ -16,7 +16,7 @@ from olympia.abuse.serializers import (
)
from olympia.accounts.serializers import BaseUserSerializer
from olympia.amo.tests import TestCase, addon_factory, collection_factory, user_factory
from olympia.constants.abuse import ILLEGAL_CATEGORIES
from olympia.constants.abuse import ILLEGAL_CATEGORIES, ILLEGAL_SUBCATEGORIES
from olympia.ratings.models import Rating
@@ -63,6 +63,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'report_entry_point': None,
'location': None,
'illegal_category': None,
'illegal_subcategory': None,
}
def test_guid_report_addon_exists_doesnt_matter(self):
@@ -94,6 +95,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'report_entry_point': None,
'location': None,
'illegal_category': None,
'illegal_subcategory': None,
}
def test_guid_report(self):
@@ -124,6 +126,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'report_entry_point': None,
'location': None,
'illegal_category': None,
'illegal_subcategory': None,
}
def test_guid_report_to_internal_value_with_some_fancy_parameters(self):
@@ -275,6 +278,7 @@ class TestUserAbuseReportSerializer(TestCase):
'lang': None,
'reason': None,
'illegal_category': None,
'illegal_subcategory': None,
}
@@ -293,6 +297,7 @@ class TestRatingAbuseReportSerializer(TestCase):
message='bad stuff',
reason=AbuseReport.REASONS.ILLEGAL,
illegal_category=ILLEGAL_CATEGORIES.ANIMAL_WELFARE,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
request = RequestFactory().get('/')
request.user = AnonymousUser()
@@ -314,6 +319,7 @@ class TestRatingAbuseReportSerializer(TestCase):
'message': 'bad stuff',
'lang': None,
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
}
@@ -348,4 +354,5 @@ class TestCollectionAbuseReportSerializer(TestCase):
'message': 'this is some spammy stûff',
'lang': None,
'illegal_category': None,
'illegal_subcategory': None,
}


@@ -14,7 +14,11 @@ from olympia import amo
from olympia.abuse.tasks import flag_high_abuse_reports_addons_according_to_review_tier
from olympia.activity.models import ActivityLog
from olympia.amo.tests import TestCase, addon_factory, days_ago, user_factory
from olympia.constants.abuse import DECISION_ACTIONS, ILLEGAL_CATEGORIES
from olympia.constants.abuse import (
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
ILLEGAL_SUBCATEGORIES,
)
from olympia.constants.reviewers import EXTRA_REVIEW_TARGET_PER_DAY_CONFIG_KEY
from olympia.files.models import File
from olympia.reviewers.models import NeedsHumanReview, ReviewActionReason, UsageTier
@@ -208,6 +212,7 @@ def test_addon_report_to_cinder(statsd_incr_mock):
reason=AbuseReport.REASONS.ILLEGAL,
message='This is bad',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@@ -236,6 +241,7 @@ def test_addon_report_to_cinder(statsd_incr_mock):
'violates the law',
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_OTHER',
'illegal_subcategory': 'KEYWORD_OTHER',
},
'entity_type': 'amo_report',
}
@@ -291,6 +297,7 @@ def test_addon_report_to_cinder_exception(statsd_incr_mock):
reason=AbuseReport.REASONS.ILLEGAL,
message='This is bad',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@@ -323,6 +330,7 @@ def test_addon_report_to_cinder_different_locale():
message='This is bad',
application_locale='fr',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@@ -350,6 +358,7 @@ def test_addon_report_to_cinder_different_locale():
'violates the law',
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_OTHER',
'illegal_subcategory': 'KEYWORD_OTHER',
},
'entity_type': 'amo_report',
}
@@ -411,6 +420,7 @@ def test_addon_appeal_to_cinder_reporter(statsd_incr_mock):
reporter_email='m@r.io',
cinder_job=cinder_job,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
responses.add(
responses.POST,
@@ -472,6 +482,7 @@ def test_addon_appeal_to_cinder_reporter_exception(statsd_incr_mock):
reporter_email='m@r.io',
cinder_job=cinder_job,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
responses.add(
responses.POST,
@@ -511,6 +522,7 @@ def test_addon_appeal_to_cinder_authenticated_reporter():
cinder_job=cinder_job,
reporter=user,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
illegal_subcategory=ILLEGAL_SUBCATEGORIES.OTHER,
)
responses.add(
responses.POST,


@@ -567,6 +567,42 @@ class AddonAbuseViewSetTestBase:
self._setup_reportable_reason('feedback_spam')
task_mock.assert_not_called()
def test_reject_illegal_category_when_reason_is_not_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'feedback_spam',
'illegal_category': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_reject_illegal_subcategory_when_reason_is_not_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'feedback_spam',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_illegal_category_required_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
@@ -607,6 +643,72 @@ class AddonAbuseViewSetTestBase:
'illegal_category': ['This field may not be null.']
}
def test_illegal_subcategory_required_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field is required.']
}
def test_illegal_subcategory_cannot_be_blank_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['"" is not a valid choice.']
}
def test_illegal_subcategory_cannot_be_null_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field may not be null.']
}
def test_illegal_subcategory_depends_on_category(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'biometric_data_breach',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value cannot be used in combination with the supplied '
'`illegal_category`.'
]
}
class TestAddonAbuseViewSetLoggedOut(AddonAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@@ -728,6 +830,7 @@ class UserAbuseViewSetTestBase:
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -754,6 +857,7 @@ class UserAbuseViewSetTestBase:
'reason': 'illegal',
'message': 'Fine!',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -842,6 +946,42 @@ class UserAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for User {user.pk}')
assert report.application_locale == 'Lô-käl'
def test_reject_illegal_category_when_reason_is_not_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'feedback_spam',
'illegal_category': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_reject_illegal_subcategory_when_reason_is_not_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'feedback_spam',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_illegal_category_required_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
@@ -882,6 +1022,72 @@ class UserAbuseViewSetTestBase:
'illegal_category': ['This field may not be null.']
}
def test_illegal_subcategory_required_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field is required.']
}
def test_illegal_subcategory_cannot_be_blank_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['"" is not a valid choice.']
}
def test_illegal_subcategory_cannot_be_null_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field may not be null.']
}
def test_illegal_subcategory_depends_on_category(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'biometric_data_breach',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value cannot be used in combination with the supplied '
'`illegal_category`.'
]
}
class TestUserAbuseViewSetLoggedOut(UserAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@@ -1375,6 +1581,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
)
@@ -1395,6 +1602,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
)
@@ -1411,6 +1619,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 400
@@ -1444,6 +1653,7 @@ class RatingAbuseViewSetTestBase:
'message': '',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -1458,6 +1668,7 @@ class RatingAbuseViewSetTestBase:
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -1495,6 +1706,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1508,6 +1720,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1525,6 +1738,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_COUNTRY_CODE='YY',
@@ -1572,6 +1786,7 @@ class RatingAbuseViewSetTestBase:
'reason': 'illegal',
'lang': 'Lô-käl',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
)
@@ -1581,6 +1796,46 @@ class RatingAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for Rating {target_rating.pk}')
assert report.application_locale == 'Lô-käl'
def test_reject_illegal_category_when_reason_is_not_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'feedback_spam',
'illegal_category': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_reject_illegal_subcategory_when_reason_is_not_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'feedback_spam',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_illegal_category_required_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
@@ -1627,6 +1882,86 @@ class RatingAbuseViewSetTestBase:
'illegal_category': ['This field may not be null.']
}
def test_illegal_subcategory_required_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field is required.']
}
def test_illegal_subcategory_cannot_be_blank_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['"" is not a valid choice.']
}
def test_illegal_subcategory_cannot_be_null_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(),
user=user_factory(),
body='Booh',
rating=1,
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field may not be null.']
}
def test_illegal_subcategory_depends_on_category(self):
target_rating = Rating.objects.create(
addon=addon_factory(),
user=user_factory(),
body='Booh',
rating=1,
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'biometric_data_breach',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value cannot be used in combination with the supplied '
'`illegal_category`.'
]
}
class TestRatingAbuseViewSetLoggedOut(RatingAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@@ -1656,6 +1991,7 @@ class TestRatingAbuseViewSetLoggedIn(RatingAbuseViewSetTestBase, TestCase):
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1672,6 +2008,7 @@ class TestRatingAbuseViewSetLoggedIn(RatingAbuseViewSetTestBase, TestCase):
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1759,6 +2096,7 @@ class CollectionAbuseViewSetTestBase:
'message': '',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -1771,6 +2109,7 @@ class CollectionAbuseViewSetTestBase:
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 201
@@ -1804,6 +2143,7 @@ class CollectionAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1817,6 +2157,7 @@ class CollectionAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1832,6 +2173,7 @@ class CollectionAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_COUNTRY_CODE='YY',
@@ -1887,6 +2229,42 @@ class CollectionAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for Collection {target_collection.pk}')
assert report.application_locale == 'Lô-käl'
def test_reject_illegal_category_when_reason_is_not_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'feedback_spam',
'illegal_category': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_reject_illegal_subcategory_when_reason_is_not_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'feedback_spam',
'illegal_subcategory': 'other',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value must be omitted or set to "null" when the `reason` is '
'not "illegal".'
],
}
def test_illegal_category_required_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
@@ -1928,6 +2306,72 @@ class CollectionAbuseViewSetTestBase:
'illegal_category': ['This field may not be null.']
}
def test_illegal_subcategory_required_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field is required.']
}
def test_illegal_subcategory_cannot_be_blank_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['"" is not a valid choice.']
}
def test_illegal_subcategory_cannot_be_null_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': ['This field may not be null.']
}
def test_illegal_subcategory_depends_on_category(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'biometric_data_breach',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_subcategory': [
'This value cannot be used in combination with the supplied '
'`illegal_category`.'
]
}
class TestCollectionAbuseViewSetLoggedOut(CollectionAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@@ -1955,6 +2399,7 @@ class TestCollectionAbuseViewSetLoggedIn(CollectionAbuseViewSetTestCas
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@@ -1971,6 +2416,7 @@ class TestCollectionAbuseViewSetLoggedIn(CollectionAbuseViewSetTestCas
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
'illegal_subcategory': 'other',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',


@@ -103,3 +103,234 @@ ILLEGAL_CATEGORIES = APIChoicesWithNone(
('VIOLENCE', 14, 'Violence'),
('OTHER', 15, 'Other'),
)
ILLEGAL_SUBCATEGORIES = APIChoicesWithNone(
('OTHER', 1, 'Something else'),
# CONSUMER_INFORMATION
(
'INSUFFICIENT_INFORMATION_ON_TRADERS',
2,
'Insufficient information on traders',
),
('NONCOMPLIANCE_PRICING', 3, 'Non-compliance with pricing regulations'),
(
'HIDDEN_ADVERTISEMENT',
4,
'Hidden advertisement or commercial communication, including by influencers',
),
(
'MISLEADING_INFO_GOODS_SERVICES',
5,
'Misleading information about the characteristics of the goods and services',
),
(
'MISLEADING_INFO_CONSUMER_RIGHTS',
6,
'Misleading information about the consumers\' rights',
),
# DATA_PROTECTION_AND_PRIVACY_VIOLATIONS
('BIOMETRIC_DATA_BREACH', 7, 'Biometric data breach'),
('MISSING_PROCESSING_GROUND', 8, 'Missing processing ground for data'),
('RIGHT_TO_BE_FORGOTTEN', 9, 'Right to be forgotten'),
('DATA_FALSIFICATION', 10, 'Data falsification'),
# ILLEGAL_OR_HARMFUL_SPEECH
('DEFAMATION', 11, 'Defamation'),
('DISCRIMINATION', 12, 'Discrimination'),
(
'HATE_SPEECH',
13,
'Illegal incitement to violence and hatred based on protected '
'characteristics (hate speech)',
),
# INTELLECTUAL_PROPERTY_INFRINGEMENTS
#
# Note: `KEYWORD_COPYRIGHT_INFRINGEMENT` and
# `KEYWORD_TRADEMARK_INFRINGEMENT` are currently not defined.
('DESIGN_INFRINGEMENT', 14, 'Design infringements'),
(
'GEOGRAPHIC_INDICATIONS_INFRINGEMENT',
15,
'Geographical indications infringements',
),
('PATENT_INFRINGEMENT', 16, 'Patent infringements'),
('TRADE_SECRET_INFRINGEMENT', 17, 'Trade secret infringements'),
# NEGATIVE_EFFECTS_ON_CIVIC_DISCOURSE_OR_ELECTIONS
(
'VIOLATION_EU_LAW',
18,
'Violation of EU law relevant to civic discourse or elections',
),
(
'VIOLATION_NATIONAL_LAW',
19,
'Violation of national law relevant to civic discourse or elections',
),
(
'MISINFORMATION_DISINFORMATION_DISINFORMATION',
20,
'Misinformation, disinformation, foreign information manipulation '
'and interference',
),
# NON_CONSENSUAL_BEHAVIOUR
('NON_CONSENSUAL_IMAGE_SHARING', 21, 'Non-consensual image sharing'),
(
'NON_CONSENSUAL_ITEMS_DEEPFAKE',
22,
'Non-consensual items containing deepfake or similar technology using '
"a third party's features",
),
('ONLINE_BULLYING_INTIMIDATION', 23, 'Online bullying/intimidation'),
('STALKING', 24, 'Stalking'),
# PORNOGRAPHY_OR_SEXUALIZED_CONTENT
('ADULT_SEXUAL_MATERIAL', 25, 'Adult sexual material'),
(
'IMAGE_BASED_SEXUAL_ABUSE',
26,
'Image-based sexual abuse (excluding content depicting minors)',
),
# PROTECTION_OF_MINORS
#
# Note: `KEYWORD_UNSAFE_CHALLENGES` is not defined on purpose.
(
'AGE_SPECIFIC_RESTRICTIONS_MINORS',
27,
'Age-specific restrictions concerning minors',
),
('CHILD_SEXUAL_ABUSE_MATERIAL', 28, 'Child sexual abuse material'),
(
'GROOMING_SEXUAL_ENTICEMENT_MINORS',
29,
'Grooming/sexual enticement of minors',
),
# RISK_FOR_PUBLIC_SECURITY
('ILLEGAL_ORGANIZATIONS', 30, 'Illegal organizations'),
('RISK_ENVIRONMENTAL_DAMAGE', 31, 'Risk for environmental damage'),
('RISK_PUBLIC_HEALTH', 32, 'Risk for public health'),
('TERRORIST_CONTENT', 33, 'Terrorist content'),
# SCAMS_AND_FRAUD
('INAUTHENTIC_ACCOUNTS', 34, 'Inauthentic accounts'),
('INAUTHENTIC_LISTINGS', 35, 'Inauthentic listings'),
('INAUTHENTIC_USER_REVIEWS', 36, 'Inauthentic user reviews'),
('IMPERSONATION_ACCOUNT_HIJACKING', 37, 'Impersonation or account hijacking'),
('PHISHING', 38, 'Phishing'),
('PYRAMID_SCHEMES', 39, 'Pyramid schemes'),
# SELF_HARM
(
'CONTENT_PROMOTING_EATING_DISORDERS',
40,
'Content promoting eating disorders',
),
('SELF_MUTILATION', 41, 'Self-mutilation'),
('SUICIDE', 42, 'Suicide'),
# UNSAFE_AND_PROHIBITED_PRODUCTS
('PROHIBITED_PRODUCTS', 43, 'Prohibited or restricted products'),
('UNSAFE_PRODUCTS', 44, 'Unsafe or non-compliant products'),
# VIOLENCE
('COORDINATED_HARM', 45, 'Coordinated harm'),
('GENDER_BASED_VIOLENCE', 46, 'Gender-based violence'),
('HUMAN_EXPLOITATION', 47, 'Human exploitation'),
('HUMAN_TRAFFICKING', 48, 'Human trafficking'),
(
'INCITEMENT_VIOLENCE_HATRED',
49,
'General calls or incitement to violence and/or hatred',
),
# ANIMAL_WELFARE
#
# Note: `KEYWORD_ANIMAL_HARM` and `KEYWORD_UNLAWFUL_SALE_ANIMALS` are
# currently not defined.
)
ILLEGAL_SUBCATEGORIES_BY_CATEGORY = {
ILLEGAL_CATEGORIES.ANIMAL_WELFARE: [
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.CONSUMER_INFORMATION: [
ILLEGAL_SUBCATEGORIES.INSUFFICIENT_INFORMATION_ON_TRADERS,
ILLEGAL_SUBCATEGORIES.NONCOMPLIANCE_PRICING,
ILLEGAL_SUBCATEGORIES.HIDDEN_ADVERTISEMENT,
ILLEGAL_SUBCATEGORIES.MISLEADING_INFO_GOODS_SERVICES,
ILLEGAL_SUBCATEGORIES.MISLEADING_INFO_CONSUMER_RIGHTS,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.DATA_PROTECTION_AND_PRIVACY_VIOLATIONS: [
ILLEGAL_SUBCATEGORIES.BIOMETRIC_DATA_BREACH,
ILLEGAL_SUBCATEGORIES.MISSING_PROCESSING_GROUND,
ILLEGAL_SUBCATEGORIES.RIGHT_TO_BE_FORGOTTEN,
ILLEGAL_SUBCATEGORIES.DATA_FALSIFICATION,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.ILLEGAL_OR_HARMFUL_SPEECH: [
ILLEGAL_SUBCATEGORIES.DEFAMATION,
ILLEGAL_SUBCATEGORIES.DISCRIMINATION,
ILLEGAL_SUBCATEGORIES.HATE_SPEECH,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.INTELLECTUAL_PROPERTY_INFRINGEMENTS: [
ILLEGAL_SUBCATEGORIES.DESIGN_INFRINGEMENT,
ILLEGAL_SUBCATEGORIES.GEOGRAPHIC_INDICATIONS_INFRINGEMENT,
ILLEGAL_SUBCATEGORIES.PATENT_INFRINGEMENT,
ILLEGAL_SUBCATEGORIES.TRADE_SECRET_INFRINGEMENT,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.NEGATIVE_EFFECTS_ON_CIVIC_DISCOURSE_OR_ELECTIONS: [
ILLEGAL_SUBCATEGORIES.VIOLATION_EU_LAW,
ILLEGAL_SUBCATEGORIES.VIOLATION_NATIONAL_LAW,
ILLEGAL_SUBCATEGORIES.MISINFORMATION_DISINFORMATION_DISINFORMATION,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.NON_CONSENSUAL_BEHAVIOUR: [
ILLEGAL_SUBCATEGORIES.NON_CONSENSUAL_IMAGE_SHARING,
ILLEGAL_SUBCATEGORIES.NON_CONSENSUAL_ITEMS_DEEPFAKE,
ILLEGAL_SUBCATEGORIES.ONLINE_BULLYING_INTIMIDATION,
ILLEGAL_SUBCATEGORIES.STALKING,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.PORNOGRAPHY_OR_SEXUALIZED_CONTENT: [
ILLEGAL_SUBCATEGORIES.ADULT_SEXUAL_MATERIAL,
ILLEGAL_SUBCATEGORIES.IMAGE_BASED_SEXUAL_ABUSE,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.PROTECTION_OF_MINORS: [
ILLEGAL_SUBCATEGORIES.AGE_SPECIFIC_RESTRICTIONS_MINORS,
ILLEGAL_SUBCATEGORIES.CHILD_SEXUAL_ABUSE_MATERIAL,
ILLEGAL_SUBCATEGORIES.GROOMING_SEXUAL_ENTICEMENT_MINORS,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.RISK_FOR_PUBLIC_SECURITY: [
ILLEGAL_SUBCATEGORIES.ILLEGAL_ORGANIZATIONS,
ILLEGAL_SUBCATEGORIES.RISK_ENVIRONMENTAL_DAMAGE,
ILLEGAL_SUBCATEGORIES.RISK_PUBLIC_HEALTH,
ILLEGAL_SUBCATEGORIES.TERRORIST_CONTENT,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.SCAMS_AND_FRAUD: [
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_ACCOUNTS,
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_LISTINGS,
ILLEGAL_SUBCATEGORIES.INAUTHENTIC_USER_REVIEWS,
ILLEGAL_SUBCATEGORIES.IMPERSONATION_ACCOUNT_HIJACKING,
ILLEGAL_SUBCATEGORIES.PHISHING,
ILLEGAL_SUBCATEGORIES.PYRAMID_SCHEMES,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.SELF_HARM: [
ILLEGAL_SUBCATEGORIES.CONTENT_PROMOTING_EATING_DISORDERS,
ILLEGAL_SUBCATEGORIES.SELF_MUTILATION,
ILLEGAL_SUBCATEGORIES.SUICIDE,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.UNSAFE_AND_PROHIBITED_PRODUCTS: [
ILLEGAL_SUBCATEGORIES.PROHIBITED_PRODUCTS,
ILLEGAL_SUBCATEGORIES.UNSAFE_PRODUCTS,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.VIOLENCE: [
ILLEGAL_SUBCATEGORIES.COORDINATED_HARM,
ILLEGAL_SUBCATEGORIES.GENDER_BASED_VIOLENCE,
ILLEGAL_SUBCATEGORIES.HUMAN_EXPLOITATION,
ILLEGAL_SUBCATEGORIES.HUMAN_TRAFFICKING,
ILLEGAL_SUBCATEGORIES.INCITEMENT_VIOLENCE_HATRED,
ILLEGAL_SUBCATEGORIES.OTHER,
],
ILLEGAL_CATEGORIES.OTHER: [
ILLEGAL_SUBCATEGORIES.OTHER,
],
}
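
The `ILLEGAL_SUBCATEGORIES_BY_CATEGORY` mapping above is what makes the `test_illegal_subcategory_depends_on_category` tests possible: a subcategory is only valid within the category it belongs to (every category also accepts `OTHER`). A minimal sketch of that dependency check, not the serializer's actual code; the string keys below stand in for the `APIChoicesWithNone` values used in the real mapping:

```python
# Illustrative subset of the category -> allowed-subcategories mapping.
ILLEGAL_SUBCATEGORIES_BY_CATEGORY = {
    'animal_welfare': {'other'},
    'data_protection_and_privacy_violations': {
        'biometric_data_breach',
        'missing_processing_ground',
        'right_to_be_forgotten',
        'data_falsification',
        'other',
    },
}


def validate_illegal_subcategory(illegal_category, illegal_subcategory):
    """Return an error message, or None if the combination is allowed."""
    allowed = ILLEGAL_SUBCATEGORIES_BY_CATEGORY.get(illegal_category, set())
    if illegal_subcategory not in allowed:
        # Same message the API tests assert on for a 400 response.
        return (
            'This value cannot be used in combination with the supplied '
            '`illegal_category`.'
        )
    return None
```

For example, `('animal_welfare', 'other')` passes, while `('animal_welfare', 'biometric_data_breach')` is rejected, matching the 400 responses asserted in the view tests.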