Add new illegal_category field in abuse reports (#22388)

William Durand 2024-06-24 10:33:57 +02:00 committed by GitHub
Parent 63678a1020
Commit bbb95c3ba3
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
13 changed files with 563 additions and 19 deletions


@ -56,6 +56,7 @@ to if necessary.
:<json string|null reason: The reason for the report. The accepted values are documented in the :ref:`table below <abuse-addon-reason-parameter>`.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -86,6 +87,7 @@ to if necessary.
:>json string|null operating_system: The client's operating system.
:>json string|null operating_system_version: The client's operating system version.
:>json string|null reason: The reason for the report.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
.. _abuse-report_entry_point-parameter:
@ -228,6 +230,30 @@ to if necessary.
both Offending content is in both locations
=========================== ===================================================
.. _abuse-report-illegal_category-parameter:
Accepted values for the ``illegal_category`` parameter:
================================================ ================================================
Value Description
================================================ ================================================
animal_welfare Animal welfare
consumer_information Consumer information infringements
data_protection_and_privacy_violations Data protection and privacy violations
illegal_or_harmful_speech Illegal or harmful speech
intellectual_property_infringements Intellectual property infringements
negative_effects_on_civic_discourse_or_elections Negative effects on civic discourse or elections
non_consensual_behaviour Non-consensual behavior
pornography_or_sexualized_content Pornography or sexualized content
protection_of_minors Protection of minors
risk_for_public_security Risk for public security
scams_and_fraud Scams or fraud
self_harm Self-harm
unsafe_and_prohibited_products Unsafe, non-compliant, or prohibited products
violence Violence
other Other
================================================ ================================================
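For illustration, a minimal sketch of a report that satisfies the new validation, assuming the v5 add-on abuse endpoint documented in this file (the host, GUID, and reporter details below are placeholders, not part of this change):

import requests

response = requests.post(
    'https://addons.mozilla.org/api/v5/abuse/report/addon/',
    json={
        'addon': '@placeholder-guid',
        'message': 'This add-on distributes pirated media.',
        'reason': 'illegal',
        # Required because reason is "illegal"; the value must come from
        # the table above.
        'illegal_category': 'intellectual_property_infringements',
        'reporter_name': 'Jane Doe',
        'reporter_email': 'jane@example.com',
    },
)
assert response.status_code == 201  # report created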
------------------------------
Submitting a user abuse report
@ -249,6 +275,7 @@ so reports can be responded to if necessary.
:<json string|null reason: The reason for the report. The accepted values are documented in the :ref:`table below <abuse-user-reason-parameter>`.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -263,6 +290,7 @@ so reports can be responded to if necessary.
:>json string user.username: The username of the user reported.
:>json string message: The body/content of the abuse report.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
.. _abuse-user-reason-parameter:
@ -298,6 +326,7 @@ so reports can be responded to if necessary.
:<json string|null reason: The reason for the report. The accepted values are documented in the :ref:`table below <abuse-rating-reason-parameter>`.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -310,6 +339,7 @@ so reports can be responded to if necessary.
:>json string message: The body/content of the abuse report.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null reason: The reason for the report.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
.. _abuse-rating-reason-parameter:
@ -345,6 +375,7 @@ so reports can be responded to if necessary.
:<json string|null reason: The reason for the report. The accepted values are documented in the :ref:`table below <abuse-collection-reason-parameter>`.
:<json string|null reporter_name: The provided name of the reporter, if not authenticated.
:<json string|null reporter_email: The provided email of the reporter, if not authenticated.
:<json string|null illegal_category: The type of illegal content - only required when the reason is set to ``illegal``. The accepted values are documented in this :ref:`table <abuse-report-illegal_category-parameter>`.
:>json object|null reporter: The user who submitted the report, if authenticated.
:>json int reporter.id: The id of the user who submitted the report.
:>json string reporter.name: The name of the user who submitted the report.
@ -356,6 +387,7 @@ so reports can be responded to if necessary.
:>json int collection.id: The id of the collection reported.
:>json string message: The body/content of the abuse report.
:>json string|null lang: The language code of the locale used by the client for the application.
:>json string|null illegal_category: The type of illegal content - only defined when the reason is set to ``illegal``.
.. _abuse-collection-reason-parameter:


@ -469,6 +469,7 @@ These are `v5` specific changes - `v4` changes apply also.
* 2023-11-02: removed ``application`` from categories endpoint, flattened ``categories`` in addon detail/search endpoint. https://github.com/mozilla/addons-server/issues/5989
* 2023-11-09: removed reviewers /enable and /disable endpoints. https://github.com/mozilla/addons-server/issues/21356
* 2023-12-07: added ``lang`` parameter to all /abuse/report/ endpoints. https://github.com/mozilla/addons-server/issues/21529
* 2024-06-20: added ``illegal_category`` parameter to all /abuse/report/ endpoints. https://github.com/mozilla/addons/issues/14870
.. _`#11380`: https://github.com/mozilla/addons-server/issues/11380/
.. _`#11379`: https://github.com/mozilla/addons-server/issues/11379/


@ -151,6 +151,7 @@ class AbuseReportAdmin(AMOModelAdmin):
'report_entry_point',
'addon_card',
'location',
'illegal_category',
)
fieldsets = (
('Abuse Report Core Information', {'fields': ('reason', 'message')}),
@ -178,6 +179,7 @@ class AbuseReportAdmin(AMOModelAdmin):
'addon_install_source_url',
'report_entry_point',
'location',
'illegal_category',
)
},
),


@ -516,19 +516,28 @@ class CinderReport(CinderEntity):
return self.get_str(self.abuse_report.id)
def get_attributes(self):
considers_illegal = (
self.abuse_report.reason == self.abuse_report.REASONS.ILLEGAL
)
return {
'id': self.id,
'created': self.get_str(self.abuse_report.created),
'reason': (
self.abuse_report.get_reason_display()
if self.abuse_report.reason
else None
),
'message': self.get_str(self.abuse_report.message),
'locale': self.abuse_report.application_locale,
# We need a boolean to expose specifically if the reporter
# considered the content illegal, as that needs to be reflected in
# the SOURCE_TYPE in the transparency database.
'considers_illegal': considers_illegal,
'illegal_category': (
self.abuse_report.illegal_category_cinder_value
if considers_illegal
else None
),
}
def report(self, *args, **kwargs):
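Taken together, a report filed with reason ``illegal`` and category ``animal_welfare`` now serializes to Cinder attributes along these lines (a sketch assembled from the tests further down; ``id`` and ``created`` vary per report):

{
    'id': '1234',
    'created': '2024-06-20 07:02:00',
    'reason': 'DSA: It violates the law or contains content that violates the law',
    'message': '',
    'locale': None,
    'considers_illegal': True,
    'illegal_category': 'STATEMENT_CATEGORY_ANIMAL_WELFARE',
}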


@ -0,0 +1,41 @@
# Generated by Django 4.2.13 on 2024-06-20 07:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('abuse', '0032_cinderpolicy_default_cinder_action_and_more'),
]
operations = [
migrations.AddField(
model_name='abusereport',
name='illegal_category',
field=models.PositiveSmallIntegerField(
blank=True,
choices=[
(None, 'None'),
(1, 'Animal welfare'),
(2, 'Consumer information infringements'),
(3, 'Data protection and privacy violations'),
(4, 'Illegal or harmful speech'),
(5, 'Intellectual property infringements'),
(6, 'Negative effects on civic discourse or elections'),
(7, 'Non-consensual behavior'),
(8, 'Pornography or sexualized content'),
(9, 'Protection of minors'),
(10, 'Risk for public security'),
(11, 'Scams or fraud'),
(12, 'Self-harm'),
(13, 'Unsafe, non-compliant, or prohibited products'),
(14, 'Violence'),
(15, 'Other'),
],
default=None,
help_text='Type of illegal content',
null=True,
),
),
]
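Assuming a standard Django environment for addons-server (the project's own dev tooling may wrap this differently), the new column lands with the usual migrate step:

python manage.py migrate abuse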


@ -14,7 +14,11 @@ from olympia.amo.models import BaseQuerySet, ManagerBase, ModelBase
from olympia.amo.templatetags.jinja_helpers import absolutify
from olympia.api.utils import APIChoicesWithNone
from olympia.bandwagon.models import Collection
from olympia.constants.abuse import (
APPEAL_EXPIRATION_DAYS,
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
)
from olympia.ratings.models import Rating
from olympia.users.models import UserProfile
from olympia.versions.models import VersionReviewerFlags
@ -627,6 +631,13 @@ class AbuseReport(ModelBase):
on_delete=models.SET_NULL,
related_name='appellants',
)
illegal_category = models.PositiveSmallIntegerField(
default=None,
choices=ILLEGAL_CATEGORIES.choices,
blank=True,
null=True,
help_text='Type of illegal content',
)
objects = AbuseReportManager()
@ -723,6 +734,14 @@ class AbuseReport(ModelBase):
and self.location in AbuseReport.LOCATION.REVIEWER_HANDLED
)
@property
def illegal_category_cinder_value(self):
if not self.illegal_category:
return None
# We should send "normalized" constants to Cinder.
const = ILLEGAL_CATEGORIES.for_value(self.illegal_category).constant
return f'STATEMENT_CATEGORY_{const}'
class CantBeAppealed(Exception):
pass
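A quick sketch of the mapping the new property implements, using values from the constants added in this commit (an unsaved report is enough, since the property only reads the field):

report = AbuseReport(
    reason=AbuseReport.REASONS.ILLEGAL,
    illegal_category=ILLEGAL_CATEGORIES.SCAMS_AND_FRAUD,  # stored as 11
)
report.illegal_category_cinder_value   # 'STATEMENT_CATEGORY_SCAMS_AND_FRAUD'
AbuseReport().illegal_category_cinder_value  # None, no category set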


@ -10,6 +10,7 @@ from olympia.accounts.serializers import BaseUserSerializer
from olympia.api.exceptions import UnavailableForLegalReasons
from olympia.api.fields import ReverseChoiceField
from olympia.api.serializers import AMOModelSerializer
from olympia.constants.abuse import ILLEGAL_CATEGORIES
from .models import AbuseReport
from .tasks import report_to_cinder
@ -51,6 +52,11 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
'The language code of the locale used by the client for the application.'
),
)
illegal_category = ReverseChoiceField(
choices=list(ILLEGAL_CATEGORIES.api_choices),
required=False,
allow_null=True,
)
class Meta:
model = AbuseReport
@ -61,6 +67,7 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
'reporter',
'reporter_name',
'reporter_email',
'illegal_category',
)
def validate(self, data):
@ -76,6 +83,17 @@ class BaseAbuseReportSerializer(AMOModelSerializer):
else:
msg = serializers.CharField.default_error_messages['blank']
raise serializers.ValidationError({'message': [msg]})
# When the reason is "illegal", the `illegal_category` field is
# required.
if data.get('reason') == AbuseReport.REASONS.ILLEGAL:
if 'illegal_category' not in data:
msg = serializers.Field.default_error_messages['required']
raise serializers.ValidationError({'illegal_category': [msg]})
elif data.get('illegal_category') is None:
msg = serializers.Field.default_error_messages['null']
raise serializers.ValidationError({'illegal_category': [msg]})
return data
def validate_target(self, data, target_name):


@ -22,7 +22,7 @@ from olympia.amo.tests import (
)
from olympia.amo.tests.test_helpers import get_image_path
from olympia.bandwagon.models import Collection, CollectionAddon
from olympia.constants.abuse import DECISION_ACTIONS, ILLEGAL_CATEGORIES
from olympia.constants.promoted import NOT_PROMOTED, NOTABLE, RECOMMENDED
from olympia.ratings.models import Rating
from olympia.reviewers.models import NeedsHumanReview
@ -241,6 +241,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
}
@ -274,6 +275,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -319,6 +321,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -400,6 +403,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
}
@ -465,6 +469,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
}
@ -517,6 +522,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -564,6 +570,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -634,6 +641,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -744,6 +752,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -836,6 +845,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -899,6 +909,7 @@ class TestCinderAddon(BaseTestCinderCase, TestCase):
'message': 'report for lots of relationships',
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1339,6 +1350,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
}
@ -1372,6 +1384,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1417,6 +1430,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1484,6 +1498,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1558,6 +1573,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1609,6 +1625,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -1704,6 +1721,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
}
@ -1784,6 +1802,7 @@ class TestCinderUser(BaseTestCinderCase, TestCase):
'message': 'report for lots of relationships',
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2004,6 +2023,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2068,6 +2088,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2154,6 +2175,7 @@ class TestCinderRating(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2241,6 +2263,7 @@ class TestCinderCollection(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2310,6 +2333,7 @@ class TestCinderCollection(BaseTestCinderCase, TestCase):
'message': encoded_message,
'reason': None,
'considers_illegal': False,
'illegal_category': None,
},
'entity_type': 'amo_report',
},
@ -2368,6 +2392,7 @@ class TestCinderReport(TestCase):
'message': '',
'reason': "DSA: It violates Mozilla's Add-on Policies",
'considers_illegal': False,
'illegal_category': None,
}
def test_locale_in_attributes(self):
@ -2381,12 +2406,14 @@ class TestCinderReport(TestCase):
'message': '',
'reason': None,
'considers_illegal': False,
'illegal_category': None,
}
def test_considers_illegal(self):
abuse_report = AbuseReport.objects.create(
guid=addon_factory().guid,
reason=AbuseReport.REASONS.ILLEGAL,
illegal_category=ILLEGAL_CATEGORIES.ANIMAL_WELFARE,
)
assert self.cinder_class(abuse_report).get_attributes() == {
'id': str(abuse_report.pk),
@ -2397,4 +2424,5 @@ class TestCinderReport(TestCase):
'DSA: It violates the law or contains content that violates the law'
),
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_ANIMAL_WELFARE',
}


@ -21,7 +21,11 @@ from olympia.amo.tests import (
user_factory,
version_review_flags_factory,
)
from olympia.constants.abuse import (
APPEAL_EXPIRATION_DAYS,
DECISION_ACTIONS,
ILLEGAL_CATEGORIES,
)
from olympia.ratings.models import Rating
from olympia.reviewers.models import NeedsHumanReview
from olympia.versions.models import VersionReviewerFlags
@ -260,6 +264,43 @@ class TestAbuse(TestCase):
(3, 'both'),
)
assert ILLEGAL_CATEGORIES.choices == (
(None, 'None'),
(1, 'Animal welfare'),
(2, 'Consumer information infringements'),
(3, 'Data protection and privacy violations'),
(4, 'Illegal or harmful speech'),
(5, 'Intellectual property infringements'),
(6, 'Negative effects on civic discourse or elections'),
(7, 'Non-consensual behavior'),
(8, 'Pornography or sexualized content'),
(9, 'Protection of minors'),
(10, 'Risk for public security'),
(11, 'Scams or fraud'),
(12, 'Self-harm'),
(13, 'Unsafe, non-compliant, or prohibited products'),
(14, 'Violence'),
(15, 'Other'),
)
assert ILLEGAL_CATEGORIES.api_choices == (
(None, None),
(1, 'animal_welfare'),
(2, 'consumer_information'),
(3, 'data_protection_and_privacy_violations'),
(4, 'illegal_or_harmful_speech'),
(5, 'intellectual_property_infringements'),
(6, 'negative_effects_on_civic_discourse_or_elections'),
(7, 'non_consensual_behaviour'),
(8, 'pornography_or_sexualized_content'),
(9, 'protection_of_minors'),
(10, 'risk_for_public_security'),
(11, 'scams_and_fraud'),
(12, 'self_harm'),
(13, 'unsafe_and_prohibited_products'),
(14, 'violence'),
(15, 'other'),
)
def test_type(self):
addon = addon_factory(guid='@lol')
report = AbuseReport.objects.create(guid=addon.guid)
@ -354,6 +395,10 @@ class TestAbuse(TestCase):
report.user_id = None
constraint.validate(AbuseReport, report)
def test_illegal_category_cinder_value_no_illegal_category(self):
report = AbuseReport()
assert not report.illegal_category_cinder_value
class TestAbuseManager(TestCase):
def test_for_addon_finds_by_author(self):
@ -2333,3 +2378,71 @@ class TestCinderDecision(TestCase):
'You may upload a new version which addresses the policy violation(s)'
not in mail.outbox[0].body
)
@pytest.mark.django_db
@pytest.mark.parametrize(
'illegal_category,expected',
[
(None, None),
(
ILLEGAL_CATEGORIES.ANIMAL_WELFARE,
'STATEMENT_CATEGORY_ANIMAL_WELFARE',
),
(
ILLEGAL_CATEGORIES.CONSUMER_INFORMATION,
'STATEMENT_CATEGORY_CONSUMER_INFORMATION',
),
(
ILLEGAL_CATEGORIES.DATA_PROTECTION_AND_PRIVACY_VIOLATIONS,
'STATEMENT_CATEGORY_DATA_PROTECTION_AND_PRIVACY_VIOLATIONS',
),
(
ILLEGAL_CATEGORIES.ILLEGAL_OR_HARMFUL_SPEECH,
'STATEMENT_CATEGORY_ILLEGAL_OR_HARMFUL_SPEECH',
),
(
ILLEGAL_CATEGORIES.INTELLECTUAL_PROPERTY_INFRINGEMENTS,
'STATEMENT_CATEGORY_INTELLECTUAL_PROPERTY_INFRINGEMENTS',
),
(
ILLEGAL_CATEGORIES.NEGATIVE_EFFECTS_ON_CIVIC_DISCOURSE_OR_ELECTIONS,
'STATEMENT_CATEGORY_NEGATIVE_EFFECTS_ON_CIVIC_DISCOURSE_OR_ELECTIONS',
),
(
ILLEGAL_CATEGORIES.NON_CONSENSUAL_BEHAVIOUR,
'STATEMENT_CATEGORY_NON_CONSENSUAL_BEHAVIOUR',
),
(
ILLEGAL_CATEGORIES.PORNOGRAPHY_OR_SEXUALIZED_CONTENT,
'STATEMENT_CATEGORY_PORNOGRAPHY_OR_SEXUALIZED_CONTENT',
),
(
ILLEGAL_CATEGORIES.PROTECTION_OF_MINORS,
'STATEMENT_CATEGORY_PROTECTION_OF_MINORS',
),
(
ILLEGAL_CATEGORIES.RISK_FOR_PUBLIC_SECURITY,
'STATEMENT_CATEGORY_RISK_FOR_PUBLIC_SECURITY',
),
(
ILLEGAL_CATEGORIES.SCAMS_AND_FRAUD,
'STATEMENT_CATEGORY_SCAMS_AND_FRAUD',
),
(ILLEGAL_CATEGORIES.SELF_HARM, 'STATEMENT_CATEGORY_SELF_HARM'),
(
ILLEGAL_CATEGORIES.UNSAFE_AND_PROHIBITED_PRODUCTS,
'STATEMENT_CATEGORY_UNSAFE_AND_PROHIBITED_PRODUCTS',
),
(ILLEGAL_CATEGORIES.VIOLENCE, 'STATEMENT_CATEGORY_VIOLENCE'),
(ILLEGAL_CATEGORIES.OTHER, 'STATEMENT_CATEGORY_OTHER'),
],
)
def test_illegal_category_cinder_value(illegal_category, expected):
addon = addon_factory()
abuse_report = AbuseReport.objects.create(
guid=addon.guid,
reason=AbuseReport.REASONS.ILLEGAL,
illegal_category=illegal_category,
)
assert abuse_report.illegal_category_cinder_value == expected


@ -16,6 +16,7 @@ from olympia.abuse.serializers import (
)
from olympia.accounts.serializers import BaseUserSerializer
from olympia.amo.tests import TestCase, addon_factory, collection_factory, user_factory
from olympia.constants.abuse import ILLEGAL_CATEGORIES
from olympia.ratings.models import Rating
@ -61,6 +62,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'reason': None,
'report_entry_point': None,
'location': None,
'illegal_category': None,
}
def test_guid_report_addon_exists_doesnt_matter(self):
@ -91,6 +93,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'reason': None,
'report_entry_point': None,
'location': None,
'illegal_category': None,
}
def test_guid_report(self):
@ -120,6 +123,7 @@ class TestAddonAbuseReportSerializer(TestCase):
'reason': None,
'report_entry_point': None,
'location': None,
'illegal_category': None,
}
def test_guid_report_to_internal_value_with_some_fancy_parameters(self):
@ -270,6 +274,7 @@ class TestUserAbuseReportSerializer(TestCase):
'message': 'bad stuff',
'lang': None,
'reason': None,
'illegal_category': None,
}
@ -284,7 +289,10 @@ class TestRatingAbuseReportSerializer(TestCase):
body='evil rating', addon=addon, user=user, rating=1
)
report = AbuseReport(
rating=rating,
message='bad stuff',
reason=AbuseReport.REASONS.ILLEGAL,
illegal_category=ILLEGAL_CATEGORIES.ANIMAL_WELFARE,
)
request = RequestFactory().get('/')
request.user = AnonymousUser()
@ -305,6 +313,7 @@ class TestRatingAbuseReportSerializer(TestCase):
'reason': 'illegal',
'message': 'bad stuff',
'lang': None,
'illegal_category': 'animal_welfare',
}
@ -338,4 +347,5 @@ class TestCollectionAbuseReportSerializer(TestCase):
'reason': 'feedback_spam',
'message': 'this is some spammy stûff',
'lang': None,
'illegal_category': None,
}


@ -14,7 +14,7 @@ from olympia import amo
from olympia.abuse.tasks import flag_high_abuse_reports_addons_according_to_review_tier
from olympia.activity.models import ActivityLog
from olympia.amo.tests import TestCase, addon_factory, days_ago, user_factory
from olympia.constants.abuse import DECISION_ACTIONS, ILLEGAL_CATEGORIES
from olympia.constants.reviewers import EXTRA_REVIEW_TARGET_PER_DAY_CONFIG_KEY
from olympia.files.models import File
from olympia.reviewers.models import NeedsHumanReview, ReviewActionReason, UsageTier
@ -204,7 +204,10 @@ def test_flag_high_abuse_reports_addons_according_to_review_tier():
def test_addon_report_to_cinder(statsd_incr_mock):
addon = addon_factory()
abuse_report = AbuseReport.objects.create(
guid=addon.guid,
reason=AbuseReport.REASONS.ILLEGAL,
message='This is bad',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@ -232,6 +235,7 @@ def test_addon_report_to_cinder(statsd_incr_mock):
'or contains content that '
'violates the law',
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_OTHER',
},
'entity_type': 'amo_report',
}
@ -283,7 +287,10 @@ def test_addon_report_to_cinder(statsd_incr_mock):
def test_addon_report_to_cinder_exception(statsd_incr_mock):
addon = addon_factory()
abuse_report = AbuseReport.objects.create(
guid=addon.guid,
reason=AbuseReport.REASONS.ILLEGAL,
message='This is bad',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@ -315,6 +322,7 @@ def test_addon_report_to_cinder_different_locale():
reason=AbuseReport.REASONS.ILLEGAL,
message='This is bad',
application_locale='fr',
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
assert not CinderJob.objects.exists()
responses.add(
@ -341,6 +349,7 @@ def test_addon_report_to_cinder_different_locale():
'or contains content that '
'violates the law',
'considers_illegal': True,
'illegal_category': 'STATEMENT_CATEGORY_OTHER',
},
'entity_type': 'amo_report',
}
@ -401,6 +410,7 @@ def test_addon_appeal_to_cinder_reporter(statsd_incr_mock):
reporter_name='It is me',
reporter_email='m@r.io',
cinder_job=cinder_job,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
responses.add(
responses.POST,
@ -461,6 +471,7 @@ def test_addon_appeal_to_cinder_reporter_exception(statsd_incr_mock):
reporter_name='It is me',
reporter_email='m@r.io',
cinder_job=cinder_job,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
responses.add(
responses.POST,
@ -499,6 +510,7 @@ def test_addon_appeal_to_cinder_authenticated_reporter():
reason=AbuseReport.REASONS.ILLEGAL,
cinder_job=cinder_job,
reporter=user,
illegal_category=ILLEGAL_CATEGORIES.OTHER,
)
responses.add(
responses.POST,


@ -567,6 +567,46 @@ class AddonAbuseViewSetTestBase:
self._setup_reportable_reason('feedback_spam')
task_mock.assert_not_called()
def test_illegal_category_required_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url, data={'addon': addon.guid, 'reason': 'illegal'}
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field is required.']
}
def test_illegal_category_cannot_be_blank_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['"" is not a valid choice.']
}
def test_illegal_category_cannot_be_null_when_reason_is_illegal(self):
addon = addon_factory(guid='@badman')
response = self.client.post(
self.url,
data={
'addon': addon.guid,
'reason': 'illegal',
'illegal_category': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field may not be null.']
}
class TestAddonAbuseViewSetLoggedOut(AddonAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@ -683,7 +723,12 @@ class UserAbuseViewSetTestBase:
def test_message_not_required_with_content_reason(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -704,7 +749,12 @@ class UserAbuseViewSetTestBase:
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'message': 'Fine!',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -792,6 +842,46 @@ class UserAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for User {user.pk}')
assert report.application_locale == 'Lô-käl'
def test_illegal_category_required_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url, data={'user': str(user.username), 'reason': 'illegal'}
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field is required.']
}
def test_illegal_category_cannot_be_blank_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['"" is not a valid choice.']
}
def test_illegal_category_cannot_be_null_when_reason_is_illegal(self):
user = user_factory()
response = self.client.post(
self.url,
data={
'user': str(user.username),
'reason': 'illegal',
'illegal_category': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field may not be null.']
}
class TestUserAbuseViewSetLoggedOut(UserAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@ -1284,6 +1374,7 @@ class RatingAbuseViewSetTestBase:
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
)
@ -1303,6 +1394,7 @@ class RatingAbuseViewSetTestBase:
'rating': target_rating.pk,
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
)
@ -1314,7 +1406,12 @@ class RatingAbuseViewSetTestBase:
def test_no_rating_fails(self):
response = self.client.post(
self.url,
data={
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {'rating': ['This field is required.']}
@ -1342,7 +1439,12 @@ class RatingAbuseViewSetTestBase:
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'message': '',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -1352,7 +1454,11 @@ class RatingAbuseViewSetTestBase:
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -1388,6 +1494,7 @@ class RatingAbuseViewSetTestBase:
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1400,6 +1507,7 @@ class RatingAbuseViewSetTestBase:
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1416,6 +1524,7 @@ class RatingAbuseViewSetTestBase:
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_COUNTRY_CODE='YY',
@ -1462,6 +1571,7 @@ class RatingAbuseViewSetTestBase:
'message': 'abuse!',
'reason': 'illegal',
'lang': 'Lô-käl',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
)
@ -1471,6 +1581,52 @@ class RatingAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for Rating {target_rating.pk}')
assert report.application_locale == 'Lô-käl'
def test_illegal_category_required_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url, data={'rating': str(target_rating.pk), 'reason': 'illegal'}
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field is required.']
}
def test_illegal_category_cannot_be_blank_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['"" is not a valid choice.']
}
def test_illegal_category_cannot_be_null_when_reason_is_illegal(self):
target_rating = Rating.objects.create(
addon=addon_factory(), user=user_factory(), body='Booh', rating=1
)
response = self.client.post(
self.url,
data={
'rating': str(target_rating.pk),
'reason': 'illegal',
'illegal_category': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field may not be null.']
}
class TestRatingAbuseViewSetLoggedOut(RatingAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@ -1499,6 +1655,7 @@ class TestRatingAbuseViewSetLoggedIn(RatingAbuseViewSetTestBase, TestCase):
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1514,6 +1671,7 @@ class TestRatingAbuseViewSetLoggedIn(RatingAbuseViewSetTestBase, TestCase):
'rating': str(target_rating.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1600,6 +1758,7 @@ class CollectionAbuseViewSetTestBase:
'collection': str(target_collection.pk),
'message': '',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -1608,7 +1767,11 @@ class CollectionAbuseViewSetTestBase:
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
)
assert response.status_code == 201
@ -1640,6 +1803,7 @@ class CollectionAbuseViewSetTestBase:
'collection': str(target_collection.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1652,6 +1816,7 @@ class CollectionAbuseViewSetTestBase:
'collection': str(target_collection.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1666,6 +1831,7 @@ class CollectionAbuseViewSetTestBase:
'collection': str(target_collection.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_COUNTRY_CODE='YY',
@ -1721,6 +1887,47 @@ class CollectionAbuseViewSetTestBase:
self.check_report(report, f'Abuse Report for Collection {target_collection.pk}')
assert report.application_locale == 'Lô-käl'
def test_illegal_category_required_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={'collection': str(target_collection.pk), 'reason': 'illegal'},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field is required.']
}
def test_illegal_category_cannot_be_blank_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': '',
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['"" is not a valid choice.']
}
def test_illegal_category_cannot_be_null_when_reason_is_illegal(self):
target_collection = collection_factory()
response = self.client.post(
self.url,
data={
'collection': str(target_collection.pk),
'reason': 'illegal',
'illegal_category': None,
},
)
assert response.status_code == 400
assert json.loads(response.content) == {
'illegal_category': ['This field may not be null.']
}
class TestCollectionAbuseViewSetLoggedOut(CollectionAbuseViewSetTestBase, TestCase):
def check_reporter(self, report):
@ -1747,6 +1954,7 @@ class TestCollectionAbuseViewSetLoggedIn(CollectionAbuseViewSetTestBase, TestCas
'collection': str(target_collection.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',
@ -1762,6 +1970,7 @@ class TestCollectionAbuseViewSetLoggedIn(CollectionAbuseViewSetTestBase, TestCas
'collection': str(target_collection.pk),
'message': 'abuse!',
'reason': 'illegal',
'illegal_category': 'animal_welfare',
},
REMOTE_ADDR='123.45.67.89',
HTTP_X_FORWARDED_FOR=f'123.45.67.89, {get_random_ip()}',


@ -1,4 +1,4 @@
from olympia.api.utils import APIChoicesWithDash, APIChoicesWithNone
APPEAL_EXPIRATION_DAYS = 184
@ -53,3 +53,53 @@ DECISION_ACTIONS.add_subset(
'APPROVING',
('AMO_APPROVE', 'AMO_APPROVE_VERSION'),
)
# Illegal categories, only used when the reason is `illegal`. The constants
# are derived from the "spec" but without the `STATEMENT_CATEGORY_` prefix.
# The `illegal_category_cinder_value` property will return the correct value
# to send to Cinder.
ILLEGAL_CATEGORIES = APIChoicesWithNone(
('ANIMAL_WELFARE', 1, 'Animal welfare'),
(
'CONSUMER_INFORMATION',
2,
'Consumer information infringements',
),
(
'DATA_PROTECTION_AND_PRIVACY_VIOLATIONS',
3,
'Data protection and privacy violations',
),
(
'ILLEGAL_OR_HARMFUL_SPEECH',
4,
'Illegal or harmful speech',
),
(
'INTELLECTUAL_PROPERTY_INFRINGEMENTS',
5,
'Intellectual property infringements',
),
(
'NEGATIVE_EFFECTS_ON_CIVIC_DISCOURSE_OR_ELECTIONS',
6,
'Negative effects on civic discourse or elections',
),
('NON_CONSENSUAL_BEHAVIOUR', 7, 'Non-consensual behavior'),
(
'PORNOGRAPHY_OR_SEXUALIZED_CONTENT',
8,
'Pornography or sexualized content',
),
('PROTECTION_OF_MINORS', 9, 'Protection of minors'),
('RISK_FOR_PUBLIC_SECURITY', 10, 'Risk for public security'),
('SCAMS_AND_FRAUD', 11, 'Scams or fraud'),
('SELF_HARM', 12, 'Self-harm'),
(
'UNSAFE_AND_PROHIBITED_PRODUCTS',
13,
'Unsafe, non-compliant, or prohibited products',
),
('VIOLENCE', 14, 'Violence'),
('OTHER', 15, 'Other'),
)
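A minimal illustration of how the three value spaces relate, based on the accessors exercised in the tests above and in the model property (``for_value(...).constant`` is the piece ``illegal_category_cinder_value`` relies on):

ILLEGAL_CATEGORIES.ANIMAL_WELFARE           # 1, the value stored in the database
ILLEGAL_CATEGORIES.for_value(1).constant    # 'ANIMAL_WELFARE', prefixed for Cinder
dict(ILLEGAL_CATEGORIES.api_choices)[1]     # 'animal_welfare', the public API value
dict(ILLEGAL_CATEGORIES.choices)[1]         # 'Animal welfare', the human-readable label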