Machine Learning Model Ideas
- Looking at prior work and the data set, provide information and ideas about which model best fits this data set
- (Some of the documentation is provided with the support of ChatGPT and should not be taken as fact)
- First, they split the data set into 80% training data and 20% test data using train_test_split()
- Normalized the training and test data using norm()
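The two steps above can be sketched as follows. This is a minimal sketch assuming scikit-learn's train_test_split; the original norm() implementation is not shown, so the z-score helper below and the placeholder arrays are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def norm(data, mean, std):
    # Assumed standardization helper; the project's actual norm() is not shown
    return (data - mean) / std

X = np.random.rand(100, 5)  # placeholder features standing in for the dataset
y = np.random.rand(100)     # placeholder targets

# 80% training data, 20% test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Normalize using statistics computed on the training split only,
# so no information from the test set leaks into preprocessing
mean, std = X_train.mean(axis=0), X_train.std(axis=0)
train_normed = norm(X_train, mean, std)
test_normed = norm(X_test, mean, std)
```

Note that the test set is normalized with the training set's statistics, which is the standard way to avoid leakage.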
- Created a neural network
This code defines a function called "create_model" which takes a single argument "input_len". The purpose of this function is to create a neural network model using the Keras API in Python.
The first line of the function creates an "Input" layer for the model, with the shape of the input being (len(train.columns), ). This means that the input to the model has a shape equal to the number of columns in a data set called "train".
Next, six dense layers are defined using the "Dense" function from the Keras API. Each layer is connected to the previous one using the functional "( )( )" syntax: the first set of parentheses configures the layer, and the second set calls it on the output of the previous layer. The number of units in each dense layer is specified using the "units" argument, and the activation function used by each dense layer is specified using the "activation" argument.
The first three dense layers are defined with 128, 128, and 64 units respectively and with the "relu" activation function. These layers form the bulk of the neural network model, with the input passing through these dense layers and the activations being computed in each layer.
The next three dense layers are defined with 32, 64, and 16 units respectively and with the "sigmoid" activation function; they feed the model's three outputs. The "y1_output", "y2_output", and "y3_output" variables are the output layers of the model, each a single-unit dense layer with a distinct name specified using the "name" argument.
Finally, a "Model" object is created using the "Model" function from the Keras API. This is used to define the overall architecture of the model and connect all the layers together. The "inputs" argument specifies the input layer of the model, which is the "input_layer" variable created earlier, and the "outputs" argument specifies the output layers of the model, which are the "y1_output", "y2_output", and "y3_output" variables.
The "model.summary()" function call at the end of the code prints a summary of the model, which includes information about the number of parameters and the shapes of the layers in the model.
The function returns the "model" object, which can be used for training, evaluation, or prediction.
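The description above can be sketched as the following Keras code. The layer sizes, activations, and output names come from the text, but the exact wiring (which sigmoid layer feeds which named output) is an assumption, as is passing the column count in as input_len:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def create_model(input_len):
    # Input shape equals the number of columns in the training data,
    # e.g. create_model(len(train.columns))
    input_layer = Input(shape=(input_len,))

    # Shared trunk: three fully-connected ReLU layers
    x = Dense(units=128, activation='relu')(input_layer)
    x = Dense(units=128, activation='relu')(x)
    x = Dense(units=64, activation='relu')(x)

    # Three sigmoid layers feeding the outputs (assumed one-per-branch wiring)
    b1 = Dense(units=32, activation='sigmoid')(x)
    b2 = Dense(units=64, activation='sigmoid')(x)
    b3 = Dense(units=16, activation='sigmoid')(x)

    # Single-unit, individually named output layers
    y1_output = Dense(units=1, name='y1_output')(b1)
    y2_output = Dense(units=1, name='y2_output')(b2)
    y3_output = Dense(units=1, name='y3_output')(b3)

    model = Model(inputs=input_layer,
                  outputs=[y1_output, y2_output, y3_output])
    model.summary()  # prints layer shapes and parameter counts
    return model
```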
Layers are the building blocks of a neural network. In a neural network, layers perform transformations on the input data, and the output from one layer is passed as input to the next layer. The combination of multiple layers in a neural network enables it to model complex relationships between the input and output variables.
In the code, the layers are implemented using the "Dense" function from the Keras API, which creates a dense, fully-connected layer. The "units" argument specifies the number of neurons in the layer, while the shape of the input data is specified separately by the "Input" layer.
Activation functions are mathematical functions that are applied element-wise to the outputs from each neuron in a layer. They introduce non-linearity into the output from the neurons, which is essential for modeling complex relationships between the input and output variables in a neural network.
There are several activation functions that are commonly used in neural networks, including:
- ReLU (Rectified Linear Unit): replaces all negative values in a neuron's output with zero. It is defined as f(x) = max(0, x).
- Sigmoid: compresses a neuron's output to the range between 0 and 1. It is defined as f(x) = 1 / (1 + exp(-x)).
- Tanh (Hyperbolic Tangent): compresses a neuron's output to the range between -1 and 1. It is defined as f(x) = 2 / (1 + exp(-2x)) - 1.

In the code, the activation functions are specified using the "activation" argument in the "Dense" function.
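The three activation functions can be implemented directly from their definitions; this small NumPy sketch is just for illustration and matches the formulas above:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): negative values become zero
    return np.maximum(0.0, x)

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)): squashes to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # f(x) = 2 / (1 + exp(-2x)) - 1: squashes to (-1, 1)
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

x = np.array([-2.0, 0.0, 2.0])
relu(x)     # → [0., 0., 2.]
sigmoid(0)  # → 0.5
```

Note that the tanh formula given above is algebraically identical to the standard hyperbolic tangent.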
- Project
- Added an additional column to the dataset, 'sumFlares', which is the total number of x, c, and m class flares in a row.
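The derived column can be computed with a row-wise sum in pandas; the flare-count column names below are hypothetical placeholders, since the dataset's actual column names are not given here:

```python
import pandas as pd

# Hypothetical column names for the X-, C-, and M-class flare counts
df = pd.DataFrame({
    'xClass': [0, 1, 0],
    'cClass': [2, 0, 1],
    'mClass': [1, 1, 0],
})

# 'sumFlares' is the total number of x, c, and m class flares in each row
df['sumFlares'] = df[['xClass', 'cClass', 'mClass']].sum(axis=1)
# df['sumFlares'] → [3, 2, 1]
```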
Created a graph in PowerBI mapping the influence of each column on the occurrence of solar flares. This graph showed that a LargestSpot value of 1 or 2 increases the likelihood of a solar flare, while an 'activity' value of 1 makes solar flares much less likely to form.
The Keras model was chosen.
Multiple machine learning models were trained and their accuracies were compared; the one with the highest accuracy was saved. Approximately 10-30% of the data was held out as testing data.
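The compare-and-keep-the-best workflow can be sketched as below. The candidate models and the generated placeholder data are assumptions, since the write-up does not say which models were tried:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the solar-flare dataset
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 20% held out, within the 10-30% range

# Hypothetical candidate models; the project's actual list is not stated
candidates = {
    'logistic_regression': LogisticRegression(max_iter=1000),
    'decision_tree': DecisionTreeClassifier(random_state=0),
    'random_forest': RandomForestClassifier(random_state=0),
}

# Fit each model and record its test-set accuracy
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in candidates.items()}

# Keep the model with the highest accuracy
best_name = max(scores, key=scores.get)
best_model = candidates[best_name]
```

In practice the kept model would then be serialized (e.g. with joblib) for later use.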