Python 3 must be installed.
```
git clone [email protected]:thorinaboenke/django-react-note-app-backend.git
cd django-react-note-app-backend
python3 -m venv .venv
source .venv/bin/activate
pip install django djangorestframework django-cors-headers coverage
python manage.py runserver
```
The app is now running at http://localhost:8000.
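Note: if the backend stores notes in a database (the Django default), you will likely need to apply migrations once before the first run:
```
python manage.py migrate
```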
URLs:

URL | Description
--- | ---
`admin/` | Django admin interface
`api/notes` | API endpoint for the note list
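With the server running, the `api/notes` endpoint can be queried directly. A minimal sketch using Python's standard library; the exact shape of the JSON response depends on the project's serializer, so treat the printed structure as an assumption:

```python
import json
import urllib.request

# Fetch the note list from the local development server.
with urllib.request.urlopen("http://localhost:8000/api/notes") as response:
    notes = json.loads(response.read())

# The response is assumed to be a JSON array of note objects.
print(notes)
```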
To run the tests, from the backend folder run:
```
coverage run --omit='*/.venv/*' manage.py test
```
To generate a test coverage report, run:
```
coverage html
```
A folder `htmlcov` will be created, containing an `index.html` file. Open `index.html` in a browser to inspect the test coverage.
```
git clone [email protected]:thorinaboenke/django-react-note-app-frontend.git
cd django-react-note-app-frontend
npm install
npm start
```
The app is now running at http://localhost:3000.
Enter a note (140 characters max.) in the text field and click 'Post my Note'.
Check the box "I don't like coffee" to hide all notes containing the word "coffee".
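The two behaviours above amount to a length check on submission and a substring filter on display. A minimal Python sketch of the underlying logic (the function names and the case-insensitive matching are assumptions, not the app's actual code):

```python
MAX_NOTE_LENGTH = 140  # the limit stated above

def validate_note(text: str) -> bool:
    """Accept a note only if it is non-empty and within the length limit."""
    return 0 < len(text) <= MAX_NOTE_LENGTH

def visible_notes(notes, hide_coffee):
    """Return the notes to display; hide 'coffee' notes when the box is checked.

    Case-insensitive substring matching is an assumption about the app's
    actual behaviour.
    """
    if not hide_coffee:
        return notes
    return [note for note in notes if "coffee" not in note.lower()]
```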
Potential risks include cyberbullying and hate speech, misinformation, copyright infringement, illegal content, defamation, breaches of privacy/doxxing, discrimination, trolling, and spam.
In particular, the anonymity of the platform may embolden users to engage in the abusive behaviour mentioned above.
Ultimately, the responsibility for illegal content and the like lies with the platform, and it may be held accountable if no reasonable preventive measures were taken.
1. Have clear community guidelines that not only state which content and behaviours are not accepted, but also encourage positive values such as supportiveness, mindfulness towards others, inclusive language, and creating a safe(r) space together. Users must agree to and accept these guidelines before posting content.
2. Implement features so that users can report harmful or inappropriate content, and encourage them to do so.
3. Screen content automatically:
   - Filter content for text patterns to prevent doxxing and privacy breaches (see the first sketch after this list).
   - Use natural language processing and machine learning tools to identify and flag threats, hate speech, and similar content.
   - Analyze usage patterns to detect spam or trolling behaviour (see the second sketch after this list).
   - Use image recognition to detect and flag inappropriate or offensive images and hate symbols.

   These algorithms can be problematic in and of themselves: depending on the training data, they may produce false positives or replicate racial and gender biases.
4. Have moderators review content posted on the app (spot checks, and whenever content is flagged as per points 2 and 3) and take action if necessary (prevent publishing, remove content, ban users, etc.).
5. For sensitive and potentially triggering topics (for example, self-harm), encourage users to add trigger warnings to their posts and/or to use 'spoiler' formatting.
6. Be transparent about the measures taken (for example, whether content is filtered algorithmically or usage patterns are tracked); no shadow-banning.
7. Have policies and an action plan for legal issues in place before an incident occurs.
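To illustrate the pattern filtering from point 3, here is a minimal sketch that flags likely email addresses and phone numbers before a note is published. The patterns are deliberately simple examples, not a production-grade PII detector:

```python
import re

# Simple, illustrative patterns for common personal data. Real systems need
# far more robust detection (and will still produce false positives).
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),      # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),         # phone-number-like digit runs
]

def contains_pii(text: str) -> bool:
    """Return True if the note matches any personal-data pattern."""
    return any(pattern.search(text) for pattern in PII_PATTERNS)

# Example: this note would be held back for review.
print(contains_pii("Call me at +49 170 1234567"))  # True
```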
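And for the usage-pattern analysis in point 3, a minimal rate-limiting sketch: it flags a client that posts more than a fixed number of notes within a time window, which catches simple spam floods. The thresholds are arbitrary assumptions:

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

MAX_POSTS = 5        # assumed threshold: more than 5 posts...
WINDOW_SECONDS = 60  # ...within 60 seconds is treated as spamming

# Recent post timestamps per (anonymous) client identifier.
recent_posts: Dict[str, Deque[float]] = defaultdict(deque)

def is_spamming(client_id: str, now: Optional[float] = None) -> bool:
    """Record a post and report whether the client exceeds the rate limit."""
    now = time.time() if now is None else now
    timestamps = recent_posts[client_id]
    timestamps.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_POSTS
```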