<!doctype html><html lang="en"><head><meta charset="UTF-8"><meta name="viewport" content="width=device-width,initial-scale=1"><meta name="description" content="David Abramov's Portfolio. Data Scientist, Visualization Researcher, Creative Coding, Artist"><link href="https://fonts.googleapis.com/css?family=IBM+Plex+Sans" rel="stylesheet"><link rel="stylesheet" href="https://use.typekit.net/vcr2znl.css"><link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=Montserrat:wght@800&display=swap" rel="stylesheet"><title>David Abramov</title><link rel="stylesheet" href="style.css"></head><body><nav class="col-1"><pattern></pattern></nav><div class="col-2"><header><h2>David Abramov</h2><div id="header-line">Ph.D. in Computational Media, UC Santa Cruz</div><img loading="lazy" id="pin-icon" class="icon" alt="location" src="static/svg/pin.svg" onclick="infoToggle('pin')"><div id="pin-text">santa cruz, ca</div><img loading="lazy" id="email-icon" class="icon" alt="email" src="static/svg/email.svg" onclick="infoToggle('email')"><div id="email-text">[email protected]</div><img loading="lazy" id="phone-icon" class="icon" alt="phone" src="static/svg/phone-call.svg" onclick="infoToggle('phone')"><div id="phone-text">(708)244-7729</div><img loading="lazy" id="share-icon" class="icon" alt="linkedin" src="static/svg/share.svg" onclick="infoToggle('share')"><div id="share-text"><a href="https://www.linkedin.com/in/duvu/">LinkedIn</a></div></header><script>function infoToggle(name){var infoList=["pin","email","phone","share"];var div=document.getElementById(name+"-text");if(div.style.display==="none"||div.style.display===""){div.style.display="inline-block";for(var i=0;i<infoList.length;i++){if(infoList[i]!==name){document.getElementById(infoList[i]+"-text").style.display="none"}}}else{div.style.display="none"}}</script><div id="about"><h3>About Me</h3><div id="about-me"><p><img loading="lazy" id="david" src="static/images/david.png" alt="david">I have a Ph.D. in Computational Media from the University of California, Santa Cruz, where I worked in the Creative Coding Lab, and I received my BS in Biology and Physics from DePaul University in Chicago. My research focuses on creating interactive visualization tools for navigating complex scientific datasets and exploring the connection between art and digital media. At UCSC, I taught Game Technologies, covering how to make a game in Unity. I have been a teaching assistant for Game Graphics and Real-time Rendering, Game Engines, Creating Digital Audio, and Musical Data. During my Ph.D., I interned at NASA JPL/Caltech/Artcenter and at Universidad de los Andes in Colombia. I have presented my research at the IEEE VIS and EuroVis conferences, and my work has been published in IEEE TVCG and in the VIS and EuroVis proceedings. Before grad school, I worked in Chicago as a Jr.
Data Analyst at Tempus Labs, interned at the Adler Planetarium building a weather balloon altitude control system, and worked backstage as an Artistic Associate at Shattered Globe Theatre Company.</p><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="resume+cv"><div id="file-text"><a href="static/documents/David Abramov - Resume 3_23.pdf">[ Resume ]</a><a href="static/documents/David Abramov - CV.pdf">[ CV ]</a></div></div></div><main class="content"><article><div id="project-header"><h3>Projects</h3></div><div id="projects"><div class="project"><div class="caption"><h4>CosmoVis: An Interactive Visual Analysis Tool for Exploring Hydrodynamic Cosmological Simulations</h4></div><div class="details"><div class="video"><div style="padding:56.25% 0 0 0;position:relative"><iframe src="https://drive.google.com/file/d/1a_X2AfBIoc_1OXIR4yRYXoSWPJzI35O8/preview" style="position:absolute;top:0;left:0;width:100%;height:100%" allow="autoplay"></iframe></div><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="CosmoVis Demo" src="https://player.vimeo.com/video/535010028?portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div><div class="text-description"><p>CosmoVis is an open-source, web-based astrophysics visualization tool that facilitates the interactive analysis of large-scale hydrodynamic cosmological simulation datasets. CosmoVis enables astrophysicists as well as citizen scientists to share and explore these datasets, which often comprise complex, unwieldy data structures greater than 1 TB in size. Our tool visualizes a range of salient gas, dark matter, and stellar attributes extracted from the source simulations, and enables further analysis of the data using observational analogues, specifically absorption line spectroscopy. CosmoVis introduces novel analysis functionality through the use of "virtual skewers" that define a sightline through the volume to quickly obtain detailed diagnostics about the gaseous medium along the path of the skewer, including synthetic spectra that can be used to make direct comparisons with observational datasets.</p></div><div class="images"><img loading="lazy" alt="cosmic sheets" src="static/images/sheet_use_case_1.png" style="width:100%"> <img loading="lazy" alt="12 Megaparsec EAGLE simulation" src="static/images/512EAGLE12Mpc.PNG" style="width:100%"> <img loading="lazy" alt="25 Megaparsec EAGLE simulation" src="static/images/512EAGLE25Mpc.PNG" style="width:100%"> <img loading="lazy" alt="15 Megaparsec slice" src="static/images/512TNG100Mpc_z2_3_slice10.png" style="width:100%"> <img loading="lazy" alt="metallicity" src="static/images/cosmovis_useCase3_updated.png" style="width:100%"> <img loading="lazy" alt="multiple attributes" src="static/images/TNG100_z2.3_teaser_highres_adjusted.png" style="width:100%"></div></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="cosmovis paper"><div id="file-text"><a href="static/documents/Abramov_CosmoVis_TVCG_2022.pdf">[ TVCG 2022 Paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/CreativeCodingLab/CosmoVis">[ github ]</a></div></div></div>
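<!-- Illustrative sketch (not wired into this page): the core idea behind CosmoVis's
"virtual skewers" is sampling a gridded attribute along a sightline through the volume.
A minimal JavaScript version, assuming a density volume stored as a flat Float32Array;
the names and the nearest-neighbor lookup are hypothetical simplifications, not the
actual CosmoVis implementation:

function sampleSkewer(volume, size, start, end, nSamples) {
  // volume: Float32Array of size*size*size attribute values
  // start, end: {x,y,z} skewer endpoints in voxel coordinates
  var samples = [];
  for (var i = 0; i < nSamples; i++) {
    var t = i / (nSamples - 1);
    // Step linearly along the skewer and round to the nearest voxel
    var x = Math.min(size - 1, Math.round(start.x + t * (end.x - start.x)));
    var y = Math.min(size - 1, Math.round(start.y + t * (end.y - start.y)));
    var z = Math.min(size - 1, Math.round(start.z + t * (end.z - start.z)));
    samples.push(volume[x + size * (y + size * z)]);
  }
  return samples; // 1D profile along the sightline, e.g. for a synthetic spectrum
}
-->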
<div class="project"><div class="caption"><h4>COLEV Tendencias: An Interactive Word Stream Visualization</h4></div><div class="details"><div style="padding:56.25% 0 0 0;position:relative"><iframe src="https://drive.google.com/file/d/1RMtnS1jJeXH7PWJXI6dWYG5_k8bQ7v1Y/preview" style="position:absolute;top:0;left:0;width:100%;height:100%" allow="autoplay"></iframe></div><div class="text-description"><p>In the summer of 2022 I traveled to Bogota, Colombia to intern at Lab En Flujo ("in flow"), where I contributed to this word stream visualization using HTML, CSS, JS, THREE.js, and custom shader code. The goal of this visualization is to show how the frequency of a set of Twitter keywords changes over time by encoding frequency as the height of each word. In a traditional "word map", a piece of text is analyzed and each word is visualized once, with its size determined by its frequency. In this case, however, we want to see how word usage changes over time, so we visualize the words in a flowing "stream".</p></div><div class="images"><img loading="lazy" alt="word stream overview" src="static/images/word_stream.png" style="width:100%"> <img loading="lazy" alt="word stream detail" src="static/images/word_stream_zoom.png" style="width:100%"></div></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/colev-tendencias">[ github ]</a></div></div></div>
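<!-- Illustrative sketch (not part of the page): the height encoding described above
boils down to mapping each keyword's per-day frequency to a glyph height. A minimal
version, assuming counts[word] is an array of daily tweet counts; the names and the
linear scale are hypothetical, as the real piece renders the words with THREE.js and
custom shaders:

function heightsFor(counts, maxHeightPx) {
  var max = 0;
  for (var word in counts) {
    max = Math.max(max, Math.max.apply(null, counts[word]));
  }
  var heights = {};
  for (var word in counts) {
    heights[word] = counts[word].map(function (c) {
      return maxHeightPx * c / max; // taller word = more frequent that day
    });
  }
  return heights;
}
-->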
<div class="project"><div class="caption"><h4>Pronóstico de Covid-19 en Colombia: Covid-19 Forecast in Colombia</h4></div><div class="details"><div style="padding:56.25% 0 0 0;position:relative"><iframe src="https://drive.google.com/file/d/1ZYb-bt9ugEvJlIZEgkL1gDflFZGmhc1T/preview" style="position:absolute;top:0;left:0;width:100%;height:100%" allow="autoplay"></iframe></div><div class="text-description"><p>I worked on this project in collaboration with Lab En Flujo ("in flow") when I was an intern in Bogota, Colombia. The goal of this tool was to compare recorded Covid-19 case and death rates with predictions created by statisticians. Users can switch between case and death rates and between daily and weekly aggregations, and can filter future predictions, along with their confidence intervals, based on the selected date. A tooltip appears when hovering over the chart, and users can zoom in to a smaller time period using the slider at the bottom. This interactive data visualization was created using D3.js.</p></div><div class="images">
</div></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="demo"><div class="github-link"><a href="https://colev.enflujo.com/pronostico-covid19/">[ demo ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/colev-pronostico-covid19">[ github ]</a></div></div></div>
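<!-- Illustrative sketch (not the project's actual code): the hover tooltip described
above, in minimal D3.js form. Assumes an svg of plotted circles bound to {date, value}
data and a #tooltip div; the selectors and data shape are hypothetical:

d3.selectAll("circle.case-point")
  .on("mouseover", function (event, d) {
    // D3 v6+ passes the DOM event first, then the bound datum
    d3.select("#tooltip")
      .style("display", "block")
      .style("left", event.pageX + 10 + "px")
      .style("top", event.pageY + 10 + "px")
      .text(d.date + ": " + d.value + " cases");
  })
  .on("mouseout", function () {
    d3.select("#tooltip").style("display", "none");
  });
-->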
<div class="project"><div class="caption"><h4>DenseVos</h4></div><div class="details"><div class="video"><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="DenseVos Video" src="https://player.vimeo.com/video/524070817?byline=0&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div><div class="text-description"><p>“DenseVOS” is an installation that introduces a salient-region-detection convolutional architecture (DenseCap) into “Voice of Sisyphus” (VOS), an existing, internationally exhibited artwork whose major feature is filtering selected regions of a single photographic image in varying ways to produce a 4-channel surround sound experience. This revised version uses a live camera feed to provide updated images that are then autonomously parsed by DenseCap to select the regions of interest to be filtered and audified. As the translation of image regions to audio is based on an FFT analysis of the selected regions’ pixels, the intent of the project is to explore to what degree a convolutional network trained on 94,000 images and 4,100,000 region-grounded captions can deliver aesthetically interesting results, given that its training was function-driven for object detection and labeling.</p></div><div class="images"><p>Process Work</p><img loading="lazy" src="static/images/vos_process1.gif" style="width:100%"> <img loading="lazy" src="static/images/vos_process2.gif" style="width:100%"></div></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/montanafowler/vos">[ github ]</a></div></div></div>
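<!-- Illustrative sketch (not the installation's code): one crude way to "audify" an
image region in the browser, mapping its mean brightness to an oscillator pitch via
the Web Audio API. The real piece performs an FFT over the region's pixels; this
brightness-to-pitch mapping is a hypothetical simplification:

function audifyRegion(pixels) {
  // pixels: Uint8ClampedArray of RGBA values from canvas getImageData
  var sum = 0;
  for (var i = 0; i < pixels.length; i += 4) {
    sum += (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
  }
  var brightness = sum / (pixels.length / 4); // mean luminance, 0..255
  var ctx = new AudioContext();
  var osc = ctx.createOscillator();
  osc.frequency.value = 110 + brightness * 4; // brighter region = higher pitch
  var gain = ctx.createGain();
  gain.gain.value = 0.1;
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  return osc; // caller stops it when the region changes
}
-->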
<div class="project"><div class="caption"><h4>Fraction8</h4></div><div class="details"><div class="images"><img loading="lazy" src="static/images/hypervelocity/hyp.gif" style="width:100%"><div class="text-description"><p>Fraction8 was a visualization solution for hydrodynamic fluid simulations of spacecraft concept designs at JPL. In collaboration with researchers, we developed an interactive small-multiples 3D visualization that could simultaneously display multiple physical parameters. One key task enabled by this tool was displaying relative ion fractions inside and outside of the spacecraft, which would measure the composition of the upper atmosphere of Venus. Using a novel 2D/3D graph implementation, relative quantitative differences are visually encoded in both color and height to give a more intuitive sense of different regions.</p></div><img loading="lazy" src="static/images/hypervelocity/image (3).png" style="width:100%"> <img loading="lazy" src="static/images/hypervelocity/image (5).png" style="width:100%"> <img loading="lazy" src="static/images/hypervelocity/image (10).png" style="width:100%"> <img loading="lazy" src="static/images/hypervelocity/image.png" style="width:100%"></div></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="fraction8 presentation"><div id="file-text"><a href="static/images/hypervelocity/FINAL PRESENTATION (edits) copy.pdf">[ presentation ]</a> <a href="https://datavis.caltech.edu/projects/frxn8/">[ about ]</a></div></div></div>
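<!-- Illustrative sketch (not project code): the double encoding described above maps
one scalar to both a bar height and a color. A minimal JavaScript version; the
two-stop color ramp and all names are hypothetical:

function encodeValue(v, vMin, vMax, maxHeight) {
  var t = (v - vMin) / (vMax - vMin); // normalize to 0..1
  return {
    height: t * maxHeight,            // taller = larger value
    color: "rgb(" + Math.round(68 + t * 185) + "," +
                    Math.round(1 + t * 230) + ",84)" // dark purple to yellow
  };
}
-->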
<div class="project"><div class="caption"><h4>RuleVis</h4></div><div class="details"><div class="videos"><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="RuleVis quick preview video" src="https://player.vimeo.com/video/535101002?autoplay=1&loop=1&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script><div class="text-description"><p>RuleVis is a web-based application for defining and editing "correct-by-construction" executable rules that model biochemical functionality and can be used to simulate the behavior of protein-protein interaction networks and other complex systems. Our application bridges the graph rewriting and systems biology research communities by providing an external visual representation of salient patterns that experts can use to determine the appropriate level of detail in a particular modeling context. This project is a collaboration between the UCSC Creative Coding Lab and the Walter Fontana Group at Harvard Medical School. Our short paper was accepted to IEEE VIS 2019. The tool uses the same syntax as the Kappa language, a rule-based language for modeling interaction networks.</p></div><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="RuleVis IEEEVIS Presentation" src="https://player.vimeo.com/video/374691687?portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="rulevis paper"><div id="file-text"><a href="https://arxiv.org/pdf/1911.04638.pdf">[ paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/CreativeCodingLab/RuleVis">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>IGM-Vis</h4></div><div class="details"><iframe loading="lazy" title="IGM-Vis Walkthrough" id="igm-vis-demo" width="100%" src="https://www.youtube-nocookie.com/embed/3ZVaExEVZOk" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><div class="text-description"><p>The Intergalactic Medium Visualization, or IGM-Vis, is a novel visualization and data analysis platform for investigating galaxies and the gas that surrounds them in context with their larger scale environment, the Cosmic Web. Environment is an important factor in the evolution of galaxies from actively forming stars to a quiescent state with little, if any, discernible star formation activity. The gaseous halos of galaxies (the circumgalactic medium, or CGM) play a critical role in their evolution, because the gas necessary to fuel star formation and any gas expelled from widely observed galactic winds must encounter this interface region between galaxies and the intergalactic medium (IGM).</p></div><img loading="lazy" src="static/images/IGM-vis-1.png"> <img loading="lazy" src="static/images/igmvis/IGM-Vis_Coherence.png"> <img loading="lazy" src="static/images/igmvis/IGM-Vis_EquivalentWidthPlot.png"> <img loading="lazy" src="static/images/igmvis/IGM-Vis_galaxies.png"> <img loading="lazy" src="static/images/igmvis/IGM-Vis_skwererSpectra.png"> <img loading="lazy" src="static/images/igmvis/IGM-Vis_zoomAndFilter.png"></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="igmvis paper"><div id="file-text"><a href="https://creativecoding.soe.ucsc.edu/pdfs/Burchett_IGM_EuroVis2019.pdf">[ paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/CreativeCodingLab/Intergalactic">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>Throw Me Away</h4></div><div class="details"><div class="video"><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="Throw Me Away" src="https://player.vimeo.com/video/309172057?byline=0&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div><div class="text-description"><p>"Throw Me Away" is a 2-minute looping video installation that was
presented at UCSC's Digital Art and New Media open studio. I combined found and original recorded footage, datamoshing techniques, and an original audio accompaniment that features a corrupt virtual assistant instructing the viewer to "throw me away."</p></div><img loading="lazy" src="static/images/danm-poster.jpg"></div><div class="links"><div id="trash-emoji">🗑</div></div></div><div class="project"><div class="caption"><h4>Manumorph</h4></div><div class="details"><img loading="lazy" src="https://github.com/sarahmfrost/manumorph/raw/master/figures/architecture.png"><div class="text-description"><p>Style transfer, the technique by which the style of one image is applied to the content of another, is one of the most popular and well-known uses of neural network algorithms. Deep Painterly Harmonization is an extension of style transfer that includes a content object placed on the style image; the network then harmonizes the style and the content. We build on Deep Painterly Harmonization, originally implemented in Torch, and re-implement the paper in TensorFlow. We extend the uses of the algorithm to explore different categories of visual media modification. We discuss the ramifications of style harmonization and style transfer on societal concepts of art, and we compare the results of the TensorFlow and Torch algorithms. Finally, we propose a design for a web application that will allow casual creators to create new art using the algorithm, without a strong technical background.</p></div><img loading="lazy" src="https://github.com/sarahmfrost/manumorph/raw/master/figures/4examples.png"><div class="text-description"><p>This paper is motivated by our fascination with style transfer. Both style transfer and deep painterly harmonization affect how we view visual art. Many people view famous paintings as static and unchanging. Deep Painterly Harmonization allows us to re-conceptualize this art as changeable and more relevant to current events and popular culture. We wanted to combine interesting content and style, make controversial new media, and push the harmonization process in new directions.</p></div></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="manumorph paper"><div id="file-text"><a href="static/documents/Manumorph.pdf">[ paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/sarahmfrost/manumorph">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>Leaf n' Meow</h4></div><div class="details"><img loading="lazy" src="static/images/leafnmeow/sad_cats.PNG" style="width:100%"><div class="text-description"><p>Leaf n' Meow was a final project for my Data Mining class, where we wanted to predict plant toxicity to cats based on plant traits. The American Society for the Prevention of Cruelty to Animals (ASPCA) provides a list of plants that are toxic and nontoxic to cats, and we wanted to see if there were any plant features that could be used to predict toxicity for plants whose cat toxicity has not been categorized.</p></div><img loading="lazy" src="static/images/leafnmeow/pipline_figure.PNG" style="width:100%"><div class="text-description"><p>We trained several predictive models on plant traits taken from the TRY plant trait database and aligned them with the scientific plant names from the ASPCA. Data cleaning was non-trivial, as there were many duplicate plants in both the ASPCA data and the TRY database.
In the end, we found a mix of categorical and numerical attributes that we hypothesized could accurately predict plant toxicity.</p></div><img loading="lazy" src="static/images/leafnmeow/roc_decision_trees.PNG" style="width:100%"> <img loading="lazy" src="static/images/leafnmeow/roc_before_after.PNG" style="width:100%"></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="presentation + paper"><div id="file-text"><a href="static/documents/CSE_243_Final_Report.pdf">[ paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/LeafNMeow">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>Coconut Island</h4></div><div class="details"><div style="padding:53.44% 0 0 0;position:relative"><iframe loading="lazy" title="Coconut Island" src="https://player.vimeo.com/video/533805564?loop=1&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script><div class="text-description"><p>This is a sound recognition Unity demo I made for students in my Game Graphics lab sections. Coconuts fall from the tree in response to the bongo drums in the audio loop, which I composed and recorded with my OP-1 sampler/synthesizer. In Unity, the students had to import the demo assets (textures, shaders, objects) and modify an audio script to listen for the bongo sound. Using the sound as a trigger, they then had to modify two other scripts: a particle emitter for the coconuts and a tree shake.</p></div><img loading="lazy" src="static/images/coconut-island.png"></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link">[ github ]</div></div></div><div class="project"><div class="caption"><h4>VR Location History</h4></div><div class="details"><img loading="lazy" src="https://raw.githubusercontent.com/davramov/immersive-analytics/master/VR%20Location%20History/screenshots/Capture1.PNG"><div class="text-description"><p>I created this VR app for my Immersive Analytics class. The goal of the project was to take some personal data and visualize it; I chose my location history as tracked by Google Maps. I first created a 2D heatmap using the MapBox API, then extended it into VR by placing virtual pins on a realistic elevation view in Unity. The user could navigate the digital world with the controllers and view where I had been.</p></div><img loading="lazy" src="https://raw.githubusercontent.com/davramov/immersive-analytics/master/VR%20Location%20History/screenshots/Capture.PNG"></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/immersive-analytics/tree/master/VR%20Location%20History">[ github ]</a></div></div></div>
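<!-- Illustrative sketch (not the project's code): a 2D heatmap like the one described
above can be built with Mapbox GL JS by adding a GeoJSON source of visited points and
a "heatmap" layer. The access token, container id, and points.geojson file are
hypothetical placeholders:

mapboxgl.accessToken = "YOUR_TOKEN_HERE";
var map = new mapboxgl.Map({
  container: "map",
  style: "mapbox://styles/mapbox/dark-v10",
  center: [-122.03, 36.97], // [lng, lat], Santa Cruz
  zoom: 9
});
map.on("load", function () {
  map.addSource("visits", { type: "geojson", data: "points.geojson" });
  map.addLayer({
    id: "visit-heat",
    type: "heatmap",
    source: "visits",
    paint: { "heatmap-radius": 12 } // denser clusters of points glow hotter
  });
});
-->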
<div class="project"><div class="caption"><h4>US Wildfire Data Analysis</h4></div><div class="details"><img loading="lazy" src="static/images/fire/causes.PNG"> <img loading="lazy" src="static/images/fire/Fire Duration.png"> <img loading="lazy" src="wildfire-cluster.png"> <img loading="lazy" src="wildfire-cluster1.png"> <img loading="lazy" src="static/images/fire/cluster1.png"> <img loading="lazy" src="static/images/fire/cluster2.png"> <img loading="lazy" src="static/images/fire/cluster3.png"></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="wildfire analysis paper"><div id="file-text"><a href="https://docs.google.com/document/d/1auVuh37gExeE0vusSE7jsEVLF0sCZb8tvjOdVC45T0w/edit?usp=sharing">[ paper ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://colab.research.google.com/drive/1wvjfEKKlyo6U720KAxLEUKgZCPVTHKs9">[ code ]</a></div></div></div><div class="project"><div class="caption"><h4>Weather Balloon Altitude Control System</h4></div><div class="details"><div class="video"><div style="padding:56.25% 0 0 0;position:relative"><iframe loading="lazy" title="Weather Balloon Flight" src="https://player.vimeo.com/video/184948745?byline=0&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div><p>One summer during undergrad I had the opportunity to work in the Far Horizons Lab at the Adler Planetarium, where I designed, built, and tested an altitude control system for their weather balloon flights. Prior to this device, the team would simply overfill their helium balloons such that they would rise to the edge of the atmosphere, grow to the size of a small house, and then pop, dropping the payload back down to Earth, where the team would track it using GPS transmitters.</p><img loading="lazy" src="static/images/helium-vent-animation.gif"><p>In order to sustain longer flights, I developed a helium venting system using a rubber stopper, a linear actuator, a pressure sensor, and an Arduino Uno. The system starts in a locked position, where the rubber stopper seals a PVC tube attached to the balloon to prevent helium from leaking out prematurely. As the balloon ascends, the altitude control system moves to an intermediate position where helium can vent out. Once the desired altitude is reached, the venting system returns to the sealed position. This way, the balloon can reach neutral buoyancy in a part of the atmosphere with minimal turbulence. Once the flight is complete, the system moves the stopper to a final position, giving the PVC insert enough space to detach from the balloon.</p>
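<!-- Illustrative sketch (the real controller ran on an Arduino Uno; this JavaScript
state machine just mirrors the venting logic described above, with hypothetical
altitude thresholds):

var VentController = {
  state: "LOCKED", // stopper sealed for launch
  update: function (altitudeMeters, flightDone) {
    if (flightDone) {
      this.state = "RELEASE";   // final position: stopper clears the PVC insert
    } else if (this.state === "LOCKED" && altitudeMeters > 1000) {
      this.state = "VENTING";   // intermediate position: helium escapes
    } else if (this.state === "VENTING" && altitudeMeters > 25000) {
      this.state = "SEALED";    // target altitude: reseal for neutral buoyancy
    }
    return this.state;          // caller drives the linear actuator accordingly
  }
};
-->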
<img loading="lazy" src="https://64.media.tumblr.com/6b00d7d153d83c201099f0d93db990b5/tumblr_oeaitaG1iv1vyt7too1_1280.jpg"><a href="http://farhorizonsfellowship.tumblr.com/"><img loading="lazy" src="https://66.media.tumblr.com/86ed69670cb5bc941a84e93f6209b29b/tumblr_oeag76u91I1vyt7too1_1280.jpg"></a></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="altitude control system blog"><div id="file-text"><a href="http://farhorizonsfellowship.tumblr.com/">[ blog ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/code/tree/master/far%20horizons">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>Predicting Clinical Outcomes from Patient Data</h4></div><div class="details"><img loading="lazy" src="survival.png"><div class="text-description"><p>For my Data Analysis and Regression final project, I found a publicly available dataset of Non-Small Cell Lung Cancer (NSCLC) patient data from the National Center for Biotechnology Information (NCBI), then trained and tested a logistic regression model in R to predict patient survival. The dataset contained data from 478 patients with 28 different variables. I created several survival analysis plots comparing gender and smoking status to survival probability.</p></div><img loading="lazy" src="static/images/survival2.png"><div class="text-description"><p>I found gender, age at diagnosis, race, adjuvant radiotherapy, time to first progression or relapse, months to last clinical assessment, and lymph node involvement to be the most predictive factors.</p></div><img loading="lazy" src="static/images/predictive-attributes.png"> <img loading="lazy" src="static/images/survival-roc.PNG"></div><div class="links"><img loading="lazy" id="file-icon" src="static/svg/file.svg" alt="presentation"><div id="file-text"><a href="static/documents/cancer.pptx" download>[ presentation ]</a></div><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://github.com/davramov/Data-Analysis-and-Regression/tree/master/LC_FinalProj">[ github ]</a></div></div></div><div class="project"><div class="caption"><h4>Fake Windows - HTML</h4></div><div class="details"><a href="https://davramov.github.io/web-art-and-design/windows/index.html"><img loading="lazy" src="windows-throwback.png" alt="fake windows desktop"></a></div><div class="links"><img loading="lazy" id="cv-coding-icon" class="coding-icon" src="static/svg/coding.svg" alt="github"><div class="github-link"><a href="https://davramov.github.io/web-art-and-design/windows/index.html">[ github ]</a></div></div></div></div></article></main><footer></footer></div><div class="col-3"></div></body></html><script>function onTouchStart(){}document.addEventListener("touchstart",onTouchStart,{passive:!0})</script>