agarnung/threepde
threepde

A 3D visualization of image evolution under partial differential equations


Want to know more about this field? Check many more interesting PDEs applied to images: image-inpainting-app and Physics Meets Pixels: PDE Models in Image Processing.

A few captures:

Image 1
Image 2
Image 3

Usage

Basic Interaction

  • Upload your own images (WebP format)
  • Toggle mesh/wireframe visibility
  • Adjust simulation speed with slider
  • Save current image

Main Options

  • Color maps:

    • constant (uniform color)
    • graymap (grayscale)
    • constant-color (original RGB)
    • constant-chrominance (height L + original ab)
    • Presets: jet, viridis, inferno, seismic, RdYlBu
  • Available PDEs:

    • Heat equation
    • Wave equation
    • Exponential decay (ODE)
  • Boundary conditions:

    • Dirichlet (fixed value)
    • Neumann (fixed derivative)
    • Periodic
    • Robin (mixed)
    • Special cases
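
As a rough sketch of how a PDE step and a boundary condition combine numerically (the actual implementation lives in src/solver.js and may differ; heatStep1D here is a hypothetical helper), an explicit forward-Euler heat-equation step on a 1D slice could look like:

```javascript
// Hypothetical sketch: one explicit (forward Euler) heat-equation step on a
// 1D array, with a selectable boundary condition. The real solver in
// src/solver.js works on the full 2D luminance field and may differ.
function heatStep1D(u, dt, bc = 'dirichlet') {
  const n = u.length;
  const next = new Float64Array(n);
  // Resolve out-of-range neighbors according to the boundary condition.
  const at = (i) => {
    if (i >= 0 && i < n) return u[i];
    if (bc === 'periodic') return u[(i + n) % n];
    if (bc === 'neumann')  return u[i < 0 ? 0 : n - 1]; // zero derivative: mirror edge value
    return 0; // dirichlet here: fixed value 0 outside the domain (a simplification)
  };
  for (let i = 0; i < n; i++) {
    // u_t = u_xx discretized with a 3-point Laplacian (dx = 1).
    next[i] = u[i] + dt * (at(i - 1) - 2 * u[i] + at(i + 1));
  }
  return next;
}
```

For this explicit scheme with dx = 1, stability requires roughly dt ≤ 0.5 (a CFL-type condition).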

Keyboard Shortcuts

| Key | Action |
| --- | ------ |
| E | Toggle 3D fullscreen |
| R | Run/Pause simulation |
| S | Save current image |
| N | Toggle normalization |
|   | Reset simulation |

Basic Usage

  1. Select image (or use default)
  2. Configure parameters:
    • PDE type
    • Boundary condition
    • Color map
    • ...
  3. Enable simulation with Run checkbox or press (R)
  4. Enjoy solving the PDE in your image

Technical notes about actions

Input images must be in WebP format, since RGBA data is read. Converting a monochrome image to WebP and then feeding it to the web app is valid.

The project folder structure is designed to be modular and optimized, and to keep responsibilities separate, e.g., static web parts from public or asset files, and from CI/CD components:

alejandro@DESKTOP-AIFFN1L:/opt/threepde$ tree
.
├── LICENSE
├── XXXX-XX-XX-three-PDE.md
├── index.html
├── public
│   ├── css
│   │   └── styles.css
│   ├── favicon.ico
│   └── images
│       ├── lena_gray.webp
│       └── lena_rgb.webp
└── src
    ├── helpers
    │   ├── color-maps.js
    │   ├── image-mesh-converter.js
    │   └── image-preprocessor.js
    ├── main.js
    └── solver.js

During development, only files inside src/ are modified. New images (by default, ones the user can select with a selector) are added to assets/images/ and are immediately accessible to the client in their original format, with no transfer processing. To preview the site during development, you can use Live Server or Live Preview in VS Code (accessible at http://localhost:5500), which serves index.html locally and reloads automatically when changes are saved. No backend (Flask, etc.) is required, since everything served is static.

We use Three.js via CDN for both development and production, specified in index.html. This avoids duplicates in the repository, leverages browser cache, and uses a minified, optimized version of the library.

<!-- In index.html (ALWAYS use CDN) -->
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/three.min.js"></script>

We use GitHub Actions to detect pushes to main and automatically minify all JS files in src/ into assets/js/main.min.js; that is, to minify and deploy the website. This keeps the src/ code intact and uploads only the final optimized version to GitHub Pages, where it is served. GitHub Pages delivers only static content (HTML, CSS, JS) with no backend required; in contrast, platforms like PythonAnywhere use Flask to run Python servers that generate dynamic content. That is why no backend is needed or configured here.

We should commit the entire src/, index.html, public/, and the YAML workflow file to keep track of them, but not dist/, because it is a temporary directory where we copy the "ready-to-publish" files (we create a custom repo PAT with the workflow permission enabled).

name: Deploy to GitHub Pages

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Install dependencies
        run: npm install -g terser clean-css-cli

      - name: Build and Minify
        run: |
          mkdir -p dist/assets/js
          mkdir -p dist/assets/css
          cp index.html dist/
          cp -r public/images dist/images
          cp public/favicon.ico dist/
          
          # Minify JS
          npx terser "src/**/*.js" -c -m -o dist/assets/js/main.min.js

          # Minify CSS
          npx cleancss -o dist/assets/css/styles.min.css public/css/styles.css

      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: dist

To decide which JS scripts to load from index.html, a conditional loading block based on the current URL could be created, to differentiate whether to load our source code in development or the minified bundle in production:

<!-- Conditional loading based on URL -->
<script>
  if (window.location.hostname === 'localhost') {
    // Development: load individual modules
    document.write('<script src="src/image-processor.js"><\/script>');
    document.write('<script src="src/pde-simulator.js"><\/script>');
  } else {
    // Production: load minified bundle
    const script = document.createElement('script');
    script.src = 'assets/js/main.min.js';
    script.defer = true;
    document.head.appendChild(script);
  }
</script>

Thus, on localhost we develop with individual source files, and on our domain (or here on github.io) the minified bundle is used automatically.

There are two types of images in the project:

  1. Static images, predefined by me in public/images/, which are part of the repository and must be optimized. For example, tools like ImageMagick or squoosh-cli should be used to convert them to an optimized format (e.g., WebP) before uploading, to reduce repository size, improve initial load time, and reduce bandwidth:
convert lena.png -resize 512x512 -quality 99 public/images/lena.webp

npx @squoosh/cli --webp '{quality:99}' lena.png -d public/images/
  2. Images uploaded by the user, coming from their local system, processed immediately and entirely in the browser. The flow is as follows:
The user selects an image in the browser.

The browser uses the FileReader API (JavaScript) to read the image file.

JavaScript then draws the image onto an HTML Canvas.

From the canvas, JavaScript gets the image pixel data using getImageData().

The pixel data is normalized to values between 0 and 1.

JavaScript uses Three.js to create a heightmap based on the normalized data.

Finally, WebGL renders the heightmap visually.
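
The normalization step above (pixels to [0, 1] heights) can be sketched as a pure function; pixelsToHeights is a hypothetical helper, and the project's actual preprocessing in src/helpers/ may differ:

```javascript
// Hypothetical sketch of steps 4-5 above: turn RGBA pixel data (as returned
// by ctx.getImageData(...).data) into scalar heights normalized to [0, 1].
function pixelsToHeights(rgba) {
  const heights = new Float64Array(rgba.length / 4);
  for (let i = 0; i < heights.length; i++) {
    const r = rgba[4 * i], g = rgba[4 * i + 1], b = rgba[4 * i + 2];
    // Rec. 601 luma as the scalar intensity; the alpha channel is ignored here.
    heights[i] = (0.299 * r + 0.587 * g + 0.114 * b) / 255;
  }
  return heights;
}
```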

For the curious, the complete development and production workflow is:

Edit files in the src/ folder.

Preview changes using a Live Server.

Check if it works correctly:

    If yes, push to GitHub.

    If no, go back to editing files.

GitHub Actions minify and deploy the project automatically.

The site becomes available at user.github.io/repo.

Additionally, we use MathJax (v3) to write equations; MathML (native to HTML5) could also be used, but it is less readable and does not support LaTeX. A polyfill is also used to guarantee that some ES6 JavaScript features are available on older browsers that do not support them natively.

The ES6 version (three.module.js) is used with an import map, and not the UMD (three.min.js), as it is the recommended modern version.
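
A minimal import-map setup along these lines (the version number below is illustrative, not necessarily the one the project pins):

```html
<script type="importmap">
  {
    "imports": {
      "three": "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js"
    }
  }
</script>
<script type="module">
  import * as THREE from 'three';
  // ... scene setup ...
</script>
```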

We also use Stats.js (from CDN) to display FPS.

Note

Regardless of the input image type, PDEs are solved on the luminance channel (even if RGB values are displayed on top, the evolution is governed by the monochrome intensity here).

To learn more about boundary conditions, see https://en.wikipedia.org/wiki/Boundary_value_problem#Examples. Also see other discretization schemes: forward Euler, backward Euler, Crank-Nicolson, forward-backward Euler, etc.

To be precise, exponential decay is an ODE, not a PDE, so there is no spatial propagation and no CFL condition is required; it does, however, have a stability condition.
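
For forward Euler applied to du/dt = -λu, the stability condition comes from the amplification factor |1 - λΔt|, which must stay below 1 (i.e., Δt < 2/λ). A hypothetical sketch, not the project's actual solver code:

```javascript
// Hypothetical sketch: forward-Euler step for du/dt = -lambda * u.
// The update u <- (1 - lambda*dt) * u decays monotonically for dt < 1/lambda
// and remains stable (possibly oscillating in sign) up to dt < 2/lambda.
function decayStep(u, lambda, dt) {
  return (1 - lambda * dt) * u;
}

function isStable(lambda, dt) {
  return Math.abs(1 - lambda * dt) < 1; // amplification factor must be < 1
}
```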

In constant-color mode (useOriginalRGB = true), the RGB colors of the image remain completely unchanged: it is like draping the original image over the mesh as a mantle. The surface has no color evolution, so the image looks intact; the mesh no longer truly behaves like a heightmap, since it is essentially just textured with the image. All heightmap cells share the same color, which looks uniform and gray but is preserved to emphasize the colormap functionality. Only in this mode does the 2D canvas display the image at its original resolution, because the solver does not alter intensity pixel-by-pixel; other modes downsample images to 512×512 to optimize solver performance.

In contrast, constant-chrominance mode (useOriginalRGB = false) combines the evolving height (L, from lightness or luminance) with the original chrominance (a, b channels from LAB color space). Here, the color changes over time according to the height evolution, but the original chrominance is preserved. This mode gives a more physically informed visual feedback of the PDE evolution, while maintaining some of the original image’s color identity.
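
As a hedged illustration of where the L channel comes from (srgbToLightness is a hypothetical helper; the project's actual conversion code in src/helpers/ may differ), CIE L* can be computed from an sRGB pixel like this:

```javascript
// Hypothetical sketch of the L extraction used in constant-chrominance mode:
// compute CIE L* (lightness) from an sRGB pixel. The evolving height drives
// this L while the original a/b chrominance channels are kept.
function srgbToLightness(r, g, b) {
  // sRGB [0, 255] -> linear-light [0, 1]
  const lin = (c) => {
    c /= 255;
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  // Relative luminance Y (D65 white), then Y -> CIE L* in [0, 100].
  const y = 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
  return y > 0.008856 ? 116 * Math.cbrt(y) - 16 : 903.3 * y;
}
```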

References

TODO

  • Regarding the needToUpdate variable: TODO run the solver in another thread and manage a shared flag to be notified when computation finishes, so rendering keeps using the current state until then, avoiding blocking the render loop when a solver step is not yet complete. This should be done with a Web Worker, but it is challenging.

  • Allow real-time control of roughness and metalness of the mesh material (see createHeightMesh()). Also allow switching the material type (basic or standard, etc.) in a dedicated Three.js options panel.

  • Add a paragraph about the original "lab mode". Explain that in "mantle mode" the image remains intact (meaning "mantle mode" really means "keep original colors" mode, while the non-mantle mode means "keep original chrominance" mode).

  • Allow users to select the heightmap size? Either by specifying width and height or by width or height while maintaining aspect ratio (not forced 512x512).

  • Fix bundle and minification to serve gh-pages instead of main branch.

  • Enable a GPU ON/OFF toggle for some, if not all, PDEs (i.e., compute them in shaders). Take inspiration from: https://chrisboligprojects.pythonanywhere.com/vertexWaves.

  • To interpolate the image's color values onto the mesh, use perspective-correct interpolation.

  • Implement an option to switch the solver resolution to the GPU. The field should not be converted to ImageData on every step, since that is exactly the CPU-GPU transfer bottleneck that ping-pong buffers (heat eq.) or a circular buffer (wave eq.) would try to solve; instead, when running on the GPU, the optimal approach is to render the texture directly to the screen using a full-screen quad (sceneView), for example.