# Philosophy

Some scattered thoughts by the architect, Neale.

## Hardening

People are going to try to break this thing. It needs to be bulletproof. That requirement pretty much set the entire design:

- As much as possible is done client-side
  - Participants can attack their own web browsers as much as they feel like
  - Also reduces server load
  - We even made a puzzle category to walk people through creating brute-force attacks!
    - Your laptop is faster than our server
    - We give you the carrot of hashed answers and the hashing function (see the first sketch after this list)
    - This removes one incentive to DoS the server
- Generate static content whenever possible
  - Puzzles must be statically compiled before the event even starts
  - As much content as possible is generated by a maintenance loop
- Minimize dynamic handling
  - There are only three (3) dynamic handlers, sketched after this list:
    - team registration
    - answer validation
    - server state (open puzzles + event log)
  - You can disable team registration if you want: just remove `teamids.txt`
  - I even removed token handling once I realized we could replicate the user experience with the answer handler and some client-side JavaScript
- As much as possible is read-only
  - The only read-write directory is `state`
  - This plays very well with Docker, which didn't exist when we designed MOTH
- Server code should be as tiny as possible
  - The server should provide highly limited functionality
  - It should be easy to keep everything it does in your head
- The server is also compiled
  - Static type-checking helps ensure there are no run-time errors
- The server only tracks who scored how many points at what time
  - This means the scoreboard program determines rankings
  - Want to provide a time bonus for quick answers? I don't, but if you do, you can just modify the scoreboard to do so.
  - Maybe you want to show a graph of team rankings over time: just replay the event log (see the final sketch after this list).
  - Want to do some analysis of which puzzles take the longest to answer? It's all there.
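
That client-side check amounts to hashing each candidate answer and comparing the digest against a published list, which is why brute-forcing never needs to touch the server. Here is a minimal sketch of the idea in Go; the real check runs in the browser as JavaScript, and the hash function and digest format below are illustrative assumptions, not MOTH's actual scheme.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashAnswer digests a candidate answer so it can be compared against
// the published list of answer hashes. (Hash choice is illustrative.)
func hashAnswer(answer string) string {
	sum := sha256.Sum256([]byte(answer))
	return hex.EncodeToString(sum[:])
}

func main() {
	// In a real event the digests arrive with the static puzzle content;
	// here we fabricate one so the example is self-contained.
	published := map[string]bool{hashAnswer("bluefish"): true}

	// A participant can brute-force locally at full laptop speed and
	// only submit to the server once they have a confirmed hit.
	wordlist := []string{"swordfish", "hunter2", "bluefish"}
	for _, guess := range wordlist {
		if published[hashAnswer(guess)] {
			fmt.Println("local hit, safe to submit:", guess)
		}
	}
}
```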
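
With everything else served as static files, the entire dynamic surface fits in a handful of routes. A sketch of that shape in Go, where the paths, handler bodies, and state structure are placeholders rather than MOTH's real API:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// state is the only read-write data the server owns: which puzzles are
// open, plus the append-only log of point awards.
type state struct {
	OpenPuzzles []string `json:"open"`
	EventLog    []string `json:"log"`
}

func main() {
	// Puzzle content and the scoreboard page are just files on a
	// read-only filesystem.
	http.Handle("/", http.FileServer(http.Dir("static")))

	// Dynamic handler 1: team registration (disabled by removing teamids.txt).
	http.HandleFunc("/register", func(w http.ResponseWriter, r *http.Request) {
		// validate the requested team ID, record it under the state directory
	})

	// Dynamic handler 2: answer validation.
	http.HandleFunc("/answer", func(w http.ResponseWriter, r *http.Request) {
		// re-check the submitted answer, append a score event on success
	})

	// Dynamic handler 3: server state (open puzzles + event log).
	http.HandleFunc("/state", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(state{})
	})

	http.ListenAndServe(":8080", nil)
}
```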
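
Because the server records nothing but point awards, every ranking, graph, or timing analysis is just a different fold over the same event log. A sketch of such a replay, assuming a made-up `timestamp team points` line format and log path rather than MOTH's actual ones:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// replay folds an event log of "timestamp team points" lines into
// per-team totals. Time bonuses or rankings-over-time graphs would be
// different folds over the same log.
func replay(path string) (map[string]int, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	totals := make(map[string]int)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) != 3 {
			continue // skip malformed lines
		}
		points, err := strconv.Atoi(fields[2])
		if err != nil {
			continue
		}
		totals[fields[1]] += points
	}
	return totals, scanner.Err()
}

func main() {
	totals, err := replay("state/points.log") // assumed path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for team, score := range totals {
		fmt.Println(team, score)
	}
}
```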

## Fairness

We spend a lot of time thinking about whether new content is going to feel fair, or, more importantly, whether there's any possibility it could be viewed as unfair.

It's possible to run fun events that don't focus so much on fairness, but those aren't the type of events we run.

- People generally don't mind discovering that they could improve
- People can get furious if they feel like some system is unfairly targeting them
- Every team that does the same amount of work should have the same score
  - No time bonuses / decaying points
  - No penalties for trying things that don't work out
- No one should ever feel like it's impossible to catch up
  - Achievements ("cheevos") work well here
  - Time-based awards (flags) don't mesh with this idea