
Commit

[DATALAD RUNCMD] run codespell throughout fixing typos automagically
=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Jul 21, 2024
1 parent 28394a4 commit 556d693
Showing 92 changed files with 168 additions and 168 deletions.
4 changes: 2 additions & 2 deletions projects/ECoG/exploreAJILE12.ipynb
@@ -298,7 +298,7 @@
},
"source": [
"### Access to data on cloud\n",
"The data is hosted on [AMAZON AWS](https://aws.amazon.com) in **S3** buckets. The following steps guide you to locate the data based on the **dandiset** information, setup streaming and reading the data from the cloud. Alternatively, you can access the data on **[DANDI](https://dandiarchive.org/dandiset/000055?search=ajile12&pos=1)**. If you choose to directly download from DANDI, you will need a github account. The following code will be sufficient to programatically download/stream data (either for colab notebook or for your own personal machine)."
"The data is hosted on [AMAZON AWS](https://aws.amazon.com) in **S3** buckets. The following steps guide you to locate the data based on the **dandiset** information, setup streaming and reading the data from the cloud. Alternatively, you can access the data on **[DANDI](https://dandiarchive.org/dandiset/000055?search=ajile12&pos=1)**. If you choose to directly download from DANDI, you will need a github account. The following code will be sufficient to programmatically download/stream data (either for colab notebook or for your own personal machine)."
]
},
{
@@ -857,7 +857,7 @@
"execution": {}
},
"source": [
"Each subject has multiple experimental sessions. You can check that programatically."
"Each subject has multiple experimental sessions. You can check that programmatically."
]
},
{
2 changes: 1 addition & 1 deletion projects/ECoG/load_ECoG_motor_imagery.ipynb
@@ -127,7 +127,7 @@
"\n",
"`dat1` and `dat2` are data from the two blocks performed in each subject. The first one was the actual movements, the second one was motor imagery. For the movement task, from the original dataset instructions:\n",
"\n",
"*Patients performed simple, repetitive, motor tasks of hand (synchronous flexion and extension of all fingers, i.e., clenching and releasing a fist at a self-paced rate of ~1-2 Hz) or tongue (opening of mouth with protrusion and retraction of the tongue, i.e., sticking the tongue in and out, also at ~1-2 Hz). These movements were performed in an interval-based manner, alternating between movement and rest, and the side of move- ment was always contralateral to the side of cortical grid placement.*\n",
"*Patients performed simple, repetitive, motor tasks of hand (synchronous flexion and extension of all fingers, i.e., clenching and releasing a fist at a self-paced rate of ~1-2 Hz) or tongue (opening of mouth with protrusion and retraction of the tongue, i.e., sticking the tongue in and out, also at ~1-2 Hz). These movements were performed in an interval-based manner, alternating between movement and rest, and the side of move- meant was always contralateral to the side of cortical grid placement.*\n",
"\n",
"<br>\n",
"\n",
2 changes: 1 addition & 1 deletion projects/behavior/Loading_CalMS21_data.ipynb
@@ -42845,7 +42845,7 @@
" stop_frame=5100,\n",
" annotation_sequence=annotation_sequence)\n",
"\n",
"# Display the animaion on colab\n",
"# Display the animation on colab\n",
"ani"
]
},
4 changes: 2 additions & 2 deletions projects/docs/project_guidance.md
@@ -79,7 +79,7 @@ We have designed tutorials to help launch your projects. Once you're done with t
(2h) Complete the intro/tutorial/outro for this day
* You will need to use your group's project for some of this content. If you don’t have concrete ideas yet, or you haven’t done a research project before, use one of the provided project templates to walk through the four steps.
* If you are using a project template, your goal is to translate the information from the slide and colab notebook into a 4-step format. Some information might not be readily available in the slide or notebook, and you might have to find it in your literature review later this day.
* Try to write down a few sentences for each of the four steps applied to your project. You will re-use these in your proposal later today.
* Try to write down a few sentences for each of the four steps applied to your project. You will reuse these in your proposal later today.

(2.5h) Literature review: identify interesting papers
The goal of this literature review is to situate your question in context and help you acquire some keywords that you will use in your proposal today.
@@ -90,7 +90,7 @@

Project block task:
(3h) Project proposal
* Try to write a proposal for this project based on the way you understand it now. This should re-use some of the text you wrote down for the four steps, and should include keywords and concepts that you identified in your literature review. Don’t worry too much about the structure of this paragraph! The goal is to get as many words (200-300) on paper as possible. You have the entire day 10 to learn how to write a properly structured scientific abstract.
* Try to write a proposal for this project based on the way you understand it now. This should reuse some of the text you wrote down for the four steps, and should include keywords and concepts that you identified in your literature review. Don’t worry too much about the structure of this paragraph! The goal is to get as many words (200-300) on paper as possible. You have the entire day 10 to learn how to write a properly structured scientific abstract.
* It is important to include the concepts which you identified as relevant, and the keywords that go with them.
* When you are ready, please submit your proposal [here](https://airtable.com/shrcYuFYMPh4jGIng). This is not mandatory and can be submitted at any time. We won't evaluate this, but we will use it to keep track of the overall progress of the groups.

4 changes: 2 additions & 2 deletions projects/fMRI/load_bonner_navigational_affordances.ipynb
@@ -352,7 +352,7 @@
"execution": {}
},
"source": [
"`Trajs.mat` contain data on the trajectories drawn by subjects during the evaluation phase before main experiment. The data is organised like `[n_images, heigth, width, n_evaluators]` There is a data on 173 images, of which 50 were presented to the participants. The filenames are stored as `dtype`.\n"
"`Trajs.mat` contain data on the trajectories drawn by subjects during the evaluation phase before main experiment. The data is organised like `[n_images, height, width, n_evaluators]` There is a data on 173 images, of which 50 were presented to the participants. The filenames are stored as `dtype`.\n"
]
},
{
@@ -376,7 +376,7 @@
],
"source": [
"trajs = loadmat('affordances/Trajs.mat')['Trajs']\n",
"fnames = trajs.dtype.names # filenames get loaded as custom dtypes due and type of array is initialy np.void due to peculiarites of how it was saved in Matlab.\n",
"fnames = trajs.dtype.names # filenames get loaded as custom dtypes due and type of array is initially np.void due to peculiarites of how it was saved in Matlab.\n",
"trajs = np.asarray(trajs[0][0].tolist()) # turn np.void into float32\n",
"trajs.shape"
]
4 changes: 2 additions & 2 deletions projects/fMRI/load_cichy_fMRI_MEG.ipynb
@@ -151,7 +151,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Downlading data...\n",
"Downloading data...\n",
"Download completed!\n"
]
}
@@ -173,7 +173,7 @@
" if r.status_code != requests.codes.ok:\n",
" print(\"!!! Failed to download data !!!\")\n",
" else:\n",
" print(\"Downlading data...\")\n",
" print(\"Downloading data...\")\n",
" with open(fname, \"wb\") as fid:\n",
" fid.write(r.content)\n",
" with zipfile.ZipFile(fname, 'r') as zip_ref:\n",
2 changes: 1 addition & 1 deletion projects/fMRI/load_fslcourse.ipynb
@@ -371,7 +371,7 @@
"execution": {}
},
"source": [
"Next we will convolve ouur regressors with the HRF. This is because the FMRI signal is a sluggish blood signal that lags behind neural signal. "
"Next we will convolve our regressors with the HRF. This is because the FMRI signal is a sluggish blood signal that lags behind neural signal. "
]
},
{
4 changes: 2 additions & 2 deletions projects/fMRI/load_hcp.ipynb
@@ -105,7 +105,7 @@
"N_RUNS_TASK = 2\n",
"\n",
"# Time series data are organized by experiment, with each experiment\n",
"# having an LR and RL (phase-encode direction) acquistion\n",
"# having an LR and RL (phase-encode direction) acquisition\n",
"BOLD_NAMES = [\n",
" \"rfMRI_REST1_LR\", \"rfMRI_REST1_RL\",\n",
" \"rfMRI_REST2_LR\", \"rfMRI_REST2_RL\",\n",
@@ -1287,7 +1287,7 @@
"outputs": [],
"source": [
"task = \"motor\"\n",
"conditions = [\"lf\", \"rf\"] # Run a substraction analysis between two conditions\n",
"conditions = [\"lf\", \"rf\"] # Run a subtraction analysis between two conditions\n",
"\n",
"contrast = []\n",
"for subject in subjects:\n",
6 changes: 3 additions & 3 deletions projects/fMRI/load_hcp_retino.ipynb
@@ -25,7 +25,7 @@
"\n",
"In order to use this dataset, please electronically sign the HCP data use terms at [ConnectomeDB](https://db.humanconnectome.org). Instructions for this are on pp. 24-25 of the [HCP Reference Manual](https://www.humanconnectome.org/storage/app/media/documentation/s1200/HCP_S1200_Release_Reference_Manual.pdf).\n",
"\n",
"The data and experiment are decribed in detail in [Benson et al.](https://jov.arvojournals.org/article.aspx?articleid=2719988#207329261)"
"The data and experiment are described in detail in [Benson et al.](https://jov.arvojournals.org/article.aspx?articleid=2719988#207329261)"
]
},
{
@@ -102,7 +102,7 @@
"TR = 1 # Time resolution, in sec\n",
"\n",
"# Time series data are organized by experiment, with each experiment\n",
"# having an LR and RL (phase-encode direction) acquistion\n",
"# having an LR and RL (phase-encode direction) acquisition\n",
"RUN_NAMES = [\n",
" \"BAR1\", # Sweeping Bars repeat 1\n",
" \"BAR2\", # Sweeping Bars repeat 2\n",
@@ -292,7 +292,7 @@
"source": [
"The design matrrix above is made of three columns. One for a cosine wave, one for a sine wave, and one constant columns. \n",
"\n",
"The first two columns together can fit a sinusoid of aritrary phase. The last column will help fit the mean of the data.\n",
"The first two columns together can fit a sinusoid of arbitrary phase. The last column will help fit the mean of the data.\n",
"\n",
"This is a linear model of the form $y = M\\beta$ which we can invert using $\\hat{\\beta}=M^+y$ where $M^+$ is the pseudoinverse of $M$.\n"
]
2 changes: 1 addition & 1 deletion projects/fMRI/load_hcp_task.ipynb
@@ -638,7 +638,7 @@
"source": [
"# Visualising the results on a brain\n",
"\n",
"Finally, we will visualise these resuts on the cortical surface of an average brain."
"Finally, we will visualise these results on the cortical surface of an average brain."
]
},
{
2 changes: 1 addition & 1 deletion projects/fMRI/load_hcp_task_with_behaviour.ipynb
@@ -569,7 +569,7 @@
"source": [
"# Visualising the results on a brain\n",
"\n",
"Finally, we will visualise these resuts on the cortical surface of an average brain."
"Finally, we will visualise these results on the cortical surface of an average brain."
]
},
{
2 changes: 1 addition & 1 deletion projects/modelingsteps/ModelingSteps_1through4.ipynb
@@ -1170,7 +1170,7 @@
"</div>\n",
"\n",
"where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.\n",
">we could simply use the frequency of occurance across repetitions as the \"strength of the illusion\"\n",
">we could simply use the frequency of occurrence across repetitions as the \"strength of the illusion\"\n",
"\n",
"We would get the noise as the standard deviation of *v(t)*, i.e.\n",
"\n",
6 changes: 3 additions & 3 deletions projects/modelingsteps/ModelingSteps_5through10.ipynb
@@ -289,7 +289,7 @@
"* **outputs**: these are the predictions our model will make that you could portentially measure (e.g., in your idealized experiment)\n",
"* **model functions**: A set of functions that perform the hypothesized computations.\n",
"\n",
"You will thus need to define a set of functions that take your data and some parameters as input, can run your model, and output a prediction for a hypothetical measurment.\n",
"You will thus need to define a set of functions that take your data and some parameters as input, can run your model, and output a prediction for a hypothetical measurement.\n",
"\n",
"**Guiding principles**:\n",
"* Keep it as simple as possible!\n",
@@ -458,7 +458,7 @@
" - e.g., our intuition is really bad when it comes to dynamical systems\n",
"\n",
"4. Not using standard model testing tools\n",
" - each field has developped specific mathematical tools to test model behaviors. You'll be expected to show such evaluations. Make use of them early on!"
" - each field has developed specific mathematical tools to test model behaviors. You'll be expected to show such evaluations. Make use of them early on!"
]
},
{
@@ -844,7 +844,7 @@
"\n",
"3. Thinking you don't need figures to explain your model\n",
" - your model draft is a great starting point!\n",
" - make figures that provide intuition about model behavior (just like you would create figures to provide intuition about expeimental data)\n",
" - make figures that provide intuition about model behavior (just like you would create figures to provide intuition about experimental data)\n",
"\n",
"4. My code is too mesy to be published\n",
" - not an option (many journal now rightfully require it)\n",
6 changes: 3 additions & 3 deletions projects/modelingsteps/TrainIllusionModel.ipynb
@@ -99,7 +99,7 @@
"Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.\n",
"\n",
"Mathematically, this would write as $S = k \\cdot N$, where $S$ is the illusion strength and $N$ is the noise level, and $k$ is a free parameter.\n",
">we could simply use the frequency of occurance across repetitions as the \"strength of the illusion\"\n",
">we could simply use the frequency of occurrence across repetitions as the \"strength of the illusion\"\n",
"\n",
"We would get the noise as the standard deviation of $v(t)$, i.e. $N=\\mathbf{E}[v(t)^2]$, where $\\mathbf{E}$ stands for the expected value.\n",
"\n",
@@ -480,7 +480,7 @@
"\n",
"So the model seems to work. Running different parameters gives us different results. Are we done?\n",
"* **can we answer our question**: yes, in our model the illusion arises because integrating very noisy vestibular signals representing motion evidence sometimes accumulate to a decision threshold and sometimes do not reach that threshold.\n",
"* **can we speak to our hypothesis**: yes, we can now simulate different trials with different noise levels (and leakage and thrshold parameters) and evaluate the hypothesized linear relationship between vestibular noise and how often our perceptual system is fooled...\n",
"* **can we speak to our hypothesis**: yes, we can now simulate different trials with different noise levels (and leakage and threshold parameters) and evaluate the hypothesized linear relationship between vestibular noise and how often our perceptual system is fooled...\n",
"* **does the model reach our goals**: yes, we wanted to generate a mechanistic model to be able to make some specific predictions that can then be tested experimentally later...\n",
"\n"
]
@@ -496,7 +496,7 @@
"\n",
"*Part of step 9*\n",
"\n",
"Ok, so we still need to actually evaluate and test our model performance. Since this is a conceptual model and we don't have actual data (yet), we will evaluate how our model behaves as a function of the 3 parameters. If we had data with different conditions, we could try to fit the model to the data and evaluate the goodness of fit, etc... If other alterative models existed, we could evaluate our model against those alternatives too.\n",
"Ok, so we still need to actually evaluate and test our model performance. Since this is a conceptual model and we don't have actual data (yet), we will evaluate how our model behaves as a function of the 3 parameters. If we had data with different conditions, we could try to fit the model to the data and evaluate the goodness of fit, etc... If other alternative models existed, we could evaluate our model against those alternatives too.\n",
"\n",
"So let's run out model in different parameter regimes and analyze the result to get some insight into the model performance"
]
@@ -57,7 +57,7 @@
"execution": {}
},
"source": [
"##### Multiple cortical areas and depths were measured concurently in each session, at a sample rate of 11Hz.\n",
"##### Multiple cortical areas and depths were measured concurrently in each session, at a sample rate of 11Hz.\n",
"##### Data was collected from excitatory and inhibitory neural populations. "
]
},
2 changes: 1 addition & 1 deletion projects/neurons/load_steinmetz_extra.ipynb
@@ -127,7 +127,7 @@
"execution": {}
},
"source": [
"`dat_LFP`, `dat_WAV`, `dat_ST` contain 39 sessions from 10 mice, data from Steinmetz et al, 2019, supplemental to the main data provided for NMA. Time bins for all measurements are 10ms, starting 500ms before stimulus onset (same as the main data). The followin fields are available across the three supplemental files. \n",
"`dat_LFP`, `dat_WAV`, `dat_ST` contain 39 sessions from 10 mice, data from Steinmetz et al, 2019, supplemental to the main data provided for NMA. Time bins for all measurements are 10ms, starting 500ms before stimulus onset (same as the main data). The following fields are available across the three supplemental files. \n",
"\n",
"* `dat['lfp']`: recording of the local field potential in each brain area from this experiment, binned at `10ms`.\n",
"* `dat['brain_area_lfp']`: brain area names for the LFP channels. \n",
2 changes: 1 addition & 1 deletion projects/neurons/load_stringer_orientations.ipynb
@@ -42,7 +42,7 @@
}
],
"source": [
"# @title Install depedencies\n",
"# @title Install dependencies\n",
"!pip install umap-learn --quiet"
]
},
4 changes: 2 additions & 2 deletions projects/theory/motor_RNNs.ipynb
@@ -567,7 +567,7 @@
"def plot_reaching_task_stimuli(stimulus, n_targets:int, tsteps:int, T:int):\n",
"\n",
" # plot target cue with \"pulse_steps\" duration\n",
" # at the beginnning of each trial\n",
" # at the beginning of each trial\n",
" stimulus_set = np.arange(0, n_targets,1)\n",
"\n",
" fig, axes = plt.subplots(n_targets, 1, figsize=(30,9))\n",
@@ -645,7 +645,7 @@
"def plot_force_stimuli(stimulus, n_targets:int, tsteps:int, T:int):\n",
"\n",
" # plot target cue with \"pulse_steps\" duration\n",
" # at the beginnning of each trial\n",
" # at the beginning of each trial\n",
" stimulus_set = np.arange(0, n_targets, 1)\n",
" fig, axes = plt.subplots(n_targets, 1, figsize=(30,9))\n",
" for target in stimulus_set:\n",
2 changes: 1 addition & 1 deletion tutorials/Bonus_Autoencoders/Bonus_Tutorial1.ipynb
@@ -1955,7 +1955,7 @@
"```python\n",
"model.apply(init_weights_kaiming_uniform)\n",
"```\n",
"An alternative is to sample from a gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$ with $\\mu=0$ and $\\sigma=1/\\sqrt{fan\\_in}$. Example for reseting all but the two last autoencoder layers to Kaiming normal:\n",
"An alternative is to sample from a gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$ with $\\mu=0$ and $\\sigma=1/\\sqrt{fan\\_in}$. Example for resetting all but the two last autoencoder layers to Kaiming normal:\n",
"\n",
"```python\n",
"model[:-2].apply(init_weights_kaiming_normal)\n",
4 changes: 2 additions & 2 deletions tutorials/Bonus_Autoencoders/Bonus_Tutorial2.ipynb
@@ -78,7 +78,7 @@
},
"outputs": [],
"source": [
"# @title Install dependecies\n",
"# @title Install dependencies\n",
"!pip install plotly --quiet"
]
},
@@ -390,7 +390,7 @@
" 3D coordinates\n",
"\n",
" Returns:\n",
" Sperical coordinates (theta, phi) on surface of unit sphere S2.\n",
" Spherical coordinates (theta, phi) on surface of unit sphere S2.\n",
" \"\"\"\n",
"\n",
" x, y, z = (u[:, 0], u[:, 1], u[:, 2])\n",