# DM i AI 2022

Welcome to the annual DM i AI event hosted by Ambolt ApS.


In this repository, you will find all the information needed to participate in the event. Please read all of the information below before proceeding to the use cases, and make sure to read the full description of every use case. You will be granted points for every use case that you provide a submission for, and a total score will be calculated from the individual submissions.

## Use cases

Below you can find the three use cases for the DM i AI 2022 event.
Each use case comes with a description and a template that can be used to set up an API endpoint.
The API endpoint is required and will be used for submission. Emily can help with setting up the API, but you are free to set it up on your own. The requirements for the API endpoints are specified in the respective use cases.

- Sentiment Analysis
- Pig & Piglet Detection
- Robot Robbers

Clone this GitHub repository to download Emily templates for all three use cases.

```
git clone https://github.com/amboltio/DM-i-AI-2022.git
```

Inside the DM-i-AI-2022 folder, you will find the three use cases. To open a use case with Emily, type `emily open <use-case>`, e.g. `emily open robot-robbers` to open the last use case.

## Emily CLI

The Emily CLI is built and maintained by Ambolt to help developers and teams implement and run production-ready, machine-learning-powered microservices quickly and easily. Click here to get started with Emily. If you sign up for Emily with your student email, you get free access to the full Emily CLI through Emily Academy.

Emily can assist you with developing the required API endpoints for the use cases. Every use case comes with a predefined, documented template that ensures the correct API endpoints and DTOs for that specific use case. You can find the documentation of the entire framework here.
The templates are built on top of the FastAPI framework, which is used to specify the endpoints in every use case.

## Discord Channel

Come hang out and talk to other competitors on our Discord channel. Discuss the use cases with each other, or get in touch with the Ambolt staff to resolve any issues or questions that arise during the competition. Join here!

## Getting started without Emily

You are not required to use Emily for competing in this event; however, we strongly recommend using Emily unless you are experienced in developing APIs and microservices. If you choose not to use Emily, you should check the individual template and find the requirements for the different API endpoints. These have to match exactly for the evaluation service to work. Inside `<use-case>/models/dtos.py` you can find the request and response DTOs, describing the input and output requirements for your API.

## Submission

When you are ready for submission, head over to the Submission Form and submit your solution for a use case by providing the host address of your API and the API key we have provided to you. Make sure that you have tested and validated your connection to the API before you submit! Click here for a guide on how to deploy your API with Emily (we recommend going through the full getting-started guide).
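As a quick sanity check before submitting, a small script like the following can verify that your host answers HTTP requests at all. The `/api` path is a placeholder here; use whichever endpoint your template actually exposes.

```python
import urllib.error
import urllib.request


def api_reachable(host: str, timeout: float = 5.0) -> bool:
    """Return True if the API host answers an HTTP request at all.

    The "/api" path is a placeholder -- substitute the health or
    prediction endpoint your template actually exposes.
    """
    try:
        urllib.request.urlopen(f"{host}/api", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status, so it is up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False
```

This only checks reachability; use the validation attempts on the submission form to confirm that your responses actually match the expected DTOs.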

You can only submit once per use case, so we highly recommend that you validate your solution before submitting. You can do this on the submission form by using the QUEUE VALIDATION ATTEMPT button. You can validate as many times as you like, but you can only evaluate once per use case. When you queue a validation, your score from the run will show up on the scoreboard, so you can see how you compare to the other teams.

When you validate your solution on the submission form, it will be evaluated on a validation set. When you submit your solution and get the final score for that use case, your solution will be evaluated on a test set which is different from the validation set. This means that the score you obtained when validating your solution may be different from the score you get when evaluating. Therefore, we encourage you not to overfit to the validation set!

## Ranked score and total score

The scoreboard will display a score for each use case and a "total score". The individual score reflects the placement your best model has achieved relative to the other participants' models.

The total score is simply an average of your individual scores.
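A minimal sketch of that computation, with made-up individual scores:

```python
# Hypothetical per-use-case scores (placement-based; the real scores
# come from the scoreboard, not from this snippet).
scores = {
    "sentiment-analysis": 0.85,
    "pig-piglet-detection": 0.70,
    "robot-robbers": 0.91,
}

# The total score is the plain average of the individual scores.
total = sum(scores.values()) / len(scores)
print(round(total, 4))  # → 0.82
```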

This format also means that you can lose points or be overtaken by other teams during the week if they submit a model that is better than yours.

## Deadline for submission

The deadline for submission is Monday the 10th of October at 14:00.

## Final evaluation

Upon completion of the contest, the top 3 highest-ranking teams will be asked to submit their training code and the trained models for validation no later than Tuesday the 11th of October at 14:00 (24 hours after the deadline). The final ranking is announced Friday the 14th of October.

## How to get a server for deployment?

For the submission, we expect you to host the server on which the REST API is deployed. You can sign up for Azure for Students, where you will get free credits that you can use to create a virtual machine. We expect all of you to be able to do this, since the competition is only for students. Alternatively, you can deploy your submission locally (this requires a public IP).
The following contains the necessary links for creating a virtual machine:

Please make sure to get a server up and running early in the competition and establish a connection to the evaluation service as quickly as possible, so that any server-related issues can be caught early rather than close to the deadline!

### What if I have already used my Azure student credits?

If you have already used your credits, reach out to us on Discord or at [email protected] and we will help you out. However, we cannot provide GPU servers, so remember to design your solutions such that they can run inference within the time constraints specified for the individual use cases.

Please note that we do not provide servers for training! You are expected to train your models and solutions on your own hardware, Google Colab, etc.

## Frequently Asked Questions

Q: Can I use a pretrained model I found on the internet?

A: Yes, you are allowed to use a pretrained model for your task. If you can find a pretrained model that fits your purpose, you will save a lot of time, just as you would when solving a problem for a company.

Q: Should we gather our own data?

A: Yes. You will not be supplied with data from us. If you need data to train a model, you should gather it yourself. We do not supply data, as this might limit your creativity and freedom in how you approach the use case.

Q: How do I use Emily to deploy my service?

A: Emily can help you with the deployment of your service. In most cases you can complete deployment by typing `emily deploy <your-project>`; you will then be asked several questions guiding you towards deployment on your server. In this guide you can read more about how to get started using Emily.