diff --git a/content/texts/shape-of-the-problem.md b/content/texts/shape-of-the-problem.md
new file mode 100644
index 0000000..da0ab72
--- /dev/null
+++ b/content/texts/shape-of-the-problem.md
@@ -0,0 +1,71 @@
+---
+title: The Shape of the Problem
+slug: shape-of-the-problem
+date: 2021.06.29
+description: Ontological data structures, real-time editing, and what a web app _really_ is man.
+type: text
+---
+
+At [work](https://smugmug.com) the other day I was thinking about a problem: Websites as User Generated Content, a part of the business I've been low-key thinking about for nearly a year. A chance conversation with [Reuben Son](https://reubenson.com/) about Vox's content editing tool [Kiln](https://github.com/clay/clay-kiln) altered my perspective on who a _user_ is when thinking about UGC. For them, their users are their editors and authors. Kiln works by loading an editor interface directly over the rendered web page, and allows for editing of any portion of that webpage.
+
+It occurred to me that one could use a real-time NoSQL database like [FaunaDB](https://fauna.com/) or [Firebase](https://firebase.google.com/) to store a document model, run an app that subscribes to changes to this database and renders the document model into the DOM, then do the same bit where an editor lays over the page and allows for editing, posting changes directly to the NoSQL database. The resulting update would re-render the document for the editor, _and anyone else also currently viewing the app_. This would look like Squarespace, but be naturally multi-tenant. After editing, a static site could be generated and hosted on S3 to serve to a larger audience. Questions around draft states, not publishing another user's edits, and other logistical things started to crowd my mind, but the core idea was interesting enough for me to decide to put a prototype together.
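The core loop is simple enough to sketch with an in-memory stand-in for the database. No vendor API here, just the shape of the idea; the store, document, and markup are all invented for illustration:

```javascript
// A tiny stand-in for a real-time document database: subscribers are
// notified whenever the document model changes.
class DocumentStore {
  constructor(doc) {
    this.doc = doc;
    this.subscribers = [];
  }
  subscribe(fn) {
    this.subscribers.push(fn);
    fn(this.doc); // render immediately with the current state
  }
  update(patch) {
    this.doc = { ...this.doc, ...patch };
    this.subscribers.forEach((fn) => fn(this.doc));
  }
}

// The renderer subscribes; the editor writes. Every viewer sharing the
// store sees the change re-rendered.
const store = new DocumentStore({ title: "Hello", body: "First draft" });

let rendered = "";
store.subscribe((doc) => {
  rendered = `<h1>${doc.title}</h1><p>${doc.body}</p>`;
});

// An editor overlay posts a change straight to the store, and the
// rendered output reflects it for everyone subscribed.
store.update({ title: "Hello, world" });
```

Swap the class for a real-time backend subscription and the template literal for a component tree, and that's the whole prototype.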
+
+Unfortunately (well…) before I could get to it, I was dropped in cold to a meeting about an impending migration of our knowledge base. We have hundreds of articles that are about to get hundreds of new URLs, and we have to update them across our entire application, hopefully also solving the problem for good and never having to do it again. Since our larger content strategy was at the forefront of my mind, I pitched creating a Knowledge Organization System that associates a UUID with an article URL, then consume that UUID in applications and never worry about the URL. Front the whole thing with a content management system and our support team can update article URLs whenever, and it's never a problem
+again.
+
+That's when I realized that _these two problems are the same problem_. Both have the same set of concerns and desired behaviors: a collection of structured data documents that support CRUD operations, paired with a visualization of the document. So:
+
+- A collection of documents.
+- A system for creation and editing of these documents.
+- A web app for rendering and visualizing these documents, either in progress or some production state.
+
+Rephrased, the components are:
+
+1. The Ontology / Structure
+2. The Editor Interface
+3. The Rendered Form\
+   a. A Dynamic form\
+   b. A Static form
+
+Note that the Dynamic form is not _necessarily_ a server or client-side rendered experience, but a way of seeing the changes you're making before committing to a production state. The Static form then is not _necessarily_ a static asset (it can be server or client side generated from a database) but denotes a _stable production state_ as viewed by an audience.
+
+For the knowledge base, the ontology is the index of articles. For the UGC websites, it is content and component structure. The editor interface is a CMS, either a product or, like Kiln, something in-house that sits on top.
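In code, the knowledge base pitch is about this small (the UUID and URLs are invented for illustration):

```javascript
// Applications reference articles by a stable UUID; only this index
// knows the current URL. When support moves an article, they update
// one entry here and every link in the product follows.
const articleIndex = new Map([
  [
    "9f1c2d34-0000-4000-8000-000000000001",
    "https://help.example.com/articles/uploading-photos",
  ],
]);

// What the app calls instead of hard-coding a URL.
function articleUrl(uuid) {
  const url = articleIndex.get(uuid);
  if (!url) throw new Error(`Unknown article: ${uuid}`);
  return url;
}

// The migration becomes a single write, not a sweep of the codebase.
articleIndex.set(
  "9f1c2d34-0000-4000-8000-000000000001",
  "https://help.example.com/kb/uploading-photos"
);
```

A 301 redirect service is the same lookup exposed over HTTP: resolve the identifier, redirect to whatever the index says today.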
+The rendered form for the sites is a static html build; for the knowledge base it's a 301 redirect service.
+
+This is super abstract on purpose! I think this is the core structure of _all web apps_. Since this structure is so general as to be mostly useless (maybe interesting as a teaching tool), the value of a _web app typology_ must come from taking an opinionated perspective on all three of the above points. This can be used as a tool to examine and critique popular web app typologies to validate the concept: how does a given typology express its disposition and behaviors in each component?
+
+Our first typology for analysis is the "Static Site" – an unchanging directory of html files generated at build time from a codebase, largely on a developer's local machine. The ontology of many popular static site generators is the local file system, and attendant metadata. The editor interface is the developer's command line and the source code repository (git really is an excellent content management system). The dynamic form is a dev server that runs locally and re-renders changes in real-time, and the static form is the built source code output to static assets.
+
+Another popular typology is the "Wordpress Site". A [classic in web development](/wordpress-but-not-terriible). Here, the ontology is a MySQL database whose structure is invented ad-hoc by the developer. The editor interface is a PHP (and React, I guess, with Wordpress 5) web application that allows for manipulation of the MySQL database. The dynamic form is the "preview" of database changes or a "staging" environment for code and data, while the static form is determined by PHP templates that fetch data from the database and interpolate pages at run time, on each request.
+
+The "Shopify" is structurally the same as the "Wordpress", but swap PHP for Ruby like a good early 2000's startup and make the Ontology a pre-determined e-commerce structure.
+I think this is the dominant web app disposition, with a range of opinions on how much should happen on the server and how much should happen on the client.
+
+Two other typologies I think are worth exploring: the "Notebook" and the "Data Driven Web App". The Notebook, like [Observable](https://observablehq.com/), positions the ontology as an unstructured document. The editor interface is a word processor app for that document, and the rendered form is a combination of a pre-set app framework and _the contents of the document_. The dynamic form is a draft state of the document, the static form is a published state. Notebooks are very interesting and different, and Observable is a great example of one. Since we're in the neighborhood of Mike Bostock, let's talk about "Data Driven Applications". The ontology _is the data doing the driving_. The rendered form is source code for visualizing the data — dynamic is locally run while editing code, static is hosted on a server or CDN. The editor interface is relinquished to whatever real-world process governs the collection of the data.
+
+Each typology is a powerful conception of what a web app can be, and each one has a unique and distinct perspective on the three important parts of what a web app is. No one is better than the others, since each has a different relationship between its components. And a different definition of good.
+
+The shape of my two problems that were actually one problem gave me an idea for a new kind of web app typology, one that borrows from the Semantic Web and real-time web apps. The dispositions and technical behavior of the app would look something like this:
+
+## 1: The Ontology
+
+This kind of app will use off-the-shelf RDFa or JSON-LD ontologies. These ontologies can be extended or created, but _must_ be valid RDFa or JSON-LD (either is fine, since they can be machine translated into each other). This allows for deeply semantic structures, machine readable relationships, and lossless data transfer between systems.
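As a concrete (and invented) example, a knowledge base article described in JSON-LD with the schema.org vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "urn:uuid:9f1c2d34-0000-4000-8000-000000000001",
  "headline": "Uploading Photos",
  "url": "https://help.example.com/kb/uploading-photos",
  "author": { "@type": "Person", "name": "Support Team" }
}
```

Because `@context` points at a shared, published vocabulary, any consumer can interpret `headline` and `author` without a bespoke API contract, and the `@id` gives the document a stable identity independent of its URL.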
+Structuring this data as a graph allows for narrow, tailored _consumption_ of the data via GraphQL without dedicated API development and maintenance. It also allows for the entire data system to be visualized.
+
+Using these semantic ontologies rather than providing a blank slate and letting the data structures grow on an ad-hoc basis also saves _a lot_ of time in creating the data models; since it's all semantic and robot friendly, a single URI to the ontology should be enough to populate all the content models any given editor interface might need.
+
+## 2: The Rendered Form
+
+The guiding principle of the rendered form is to be as light as possible over the wire, for both the dynamic and static forms.
+
+The rendered form splits the difference between a JAMstack real-time application and a static site. The dynamic form is hosted as a web app, and subscribes to real-time changes in the ontology documents. As the data changes, the dynamic form updates to reflect it. This takes the development build off of the local machine and puts it where more than one person can see it at a time. The static form is a built collection of static html files that can be hosted from any CDN to the wider audience.
+
+## 3: The Editor Interface
+
+The editor is completely separate from the renderer, but a common protocol unites it with the renderer on one end and the ontology on the other. Load the editor on any given dynamic rendered page to get an experience where _any_ given property of the ontology can be edited, written to the database, and the effects seen immediately in real-time by anyone connected to the dynamic render. The diffs can then be stashed, discarded, or published.
+
+## Conclusion
+
+The end result of an app like this would be an experience sort of like [Glitch](https://glitch.com/), sort of like [Squarespace](https://www.squarespace.com/) and sort of like [Observable](https://observablehq.com/).
+
+With permissions around what rendering or editing app can see or touch what in the database, it's possible and even _important_ that any given concept that relates to the system can be represented in the system — either the data itself or a metadata record that is indexical to the data. This allows the entire system to be meaningfully connected, which enables solutions for many common problems (URI mapping, incompatible data structures, duplicated databases, nightmare migrations, vendor lock-in) and opens the door to new use cases and implementations, like bespoke CRMs seamlessly integrated into a product or an ecosystem of _editing experiences_ that are completely independent of any given renderer or ontology.
+
+A clumsy handle for this kind of app could be a Semantic Mono-Database. Not as catchy as JAMstack, SPA, or Static Site. We'll get there tho, I'm sure a meaningful name will present itself as I work to build the first real one of these things.
+
diff --git a/src/data/resources/index.json b/src/data/resources/index.json
index 84ba110..2300d9b 100644
--- a/src/data/resources/index.json
+++ b/src/data/resources/index.json
@@ -1 +1 @@
-[{"url":"https://threads.js.org/","title":"Web worker meets worker threads - threads.js","description":"Make web workers & worker threads as simple as a function call – worker library for node.js, web and electron. 
JavaScript/TypeScript, supports webpack.\n","keywords":""},{"url":"https://k6.io/","title":"Load testing for engineering teams | k6","description":"k6 is an open-source load testing tool and cloud service providing the best developer experience for API performance testing.","keywords":""},{"url":"http://www.heppnetz.de/projects/goodrelations/","title":"GoodRelations: The Professional Web Vocabulary for E-Commerce","description":"GoodRelations is the most powerful language for product, price, and company data that can (1) be embedded into existing static and dynamic Web pages and that (2) can be processed by other computers. This increases the visibility of your products and services in the latest generation of search engines, recommender systems, and other novel applications.","keywords":"goodrelations, rdfa, seo, linked data, searchmonkey, yahoo, sem, rdf, owl, rdf-s, e-commerce, semantic web"},{"url":"https://inclusive-components.design/","title":"Inclusive Components","description":"A blog trying to be a pattern library. All about designing inclusive web interfaces, piece by piece.","keywords":""},{"url":"https://uniformcss.com/docs/overview/","title":"Uniform CSS - Overview","description":"Quick dive into Uniform CSS and its features.","keywords":""},{"url":"https://michaelscodingspot.com/javascript-performance-apis/","title":"New browser APIs to detect JavaScript performance problems in production","description":"Talking about APIs for performance metrics in the browser. 
Find out how much problems your JS code really causes in production.","keywords":""},{"url":"http://docs.mathjax.org/en/latest/","title":"MathJax Documentation — MathJax 3.1 documentation","description":"","keywords":""},{"url":"https://undici.nodejs.org/","title":"Node.js Undici","description":"A HTTP/1.1 client, written from scratch for Node.js.","keywords":""},{"url":"https://upstash.com/","title":"Upstash: Serverless Database for Redis","description":"Designed for the serverless with per-request pricing and Redis API on durable storage.","keywords":""},{"url":"https://traduora.com/","title":"traduora: translation management platform for teams.","description":"traduora | translation management platform for teams","keywords":"localization tool, software localization, internationalization, android localization, ios localization, language localization, app translate, app internationalization, easy translation"},{"url":"http://mikemcl.github.io/big.js/","title":"big.js API","description":"","keywords":""},{"url":"https://svelte-motion.gradientdescent.de/","title":"Svelte-Motion","description":"An animation library for Svelte apps based on Framer Motion.","keywords":""},{"url":"https://www.npmjs.com/package/arquero","title":"arquero","description":"Query processing and transformation of array-backed data tables.","keywords":"data,query,database,table,dataframe,transform,arrays"},{"url":"https://arrow.apache.org/docs/js/","title":"Apache Arrow","description":"Documentation for Apache Arrow","keywords":""},{"url":"https://github.com/uwdata/falcon","title":"uwdata/falcon","description":"Brushing and linking for big data. 
Contribute to uwdata/falcon development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/vercel/tracing-js","title":"vercel/tracing-js","description":"An implementation of Opentracing API for honeycomb.io - vercel/tracing-js","keywords":""},{"url":"https://getstarted.sh/","title":null,"description":"","keywords":""},{"url":"https://mdsvex.pngwn.io/","title":"mdsvex - svelte in markdown","description":"Combine svelte and markdown in the same file. Live your dreams!","keywords":""},{"url":"https://github.com/Pimm/mapsort","title":"Pimm/mapsort","description":"Performant sorting for complex input. Contribute to Pimm/mapsort development by creating an account on GitHub.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/bake-layers-accessibility-testing-process/","title":"How To Bake Layers Of Accessibility Testing Into Your Process — Smashing Magazine","description":"Accessibility experts Kate Kalcevich and Mike Gifford introduce readers to \"layered accessibility testing”, a practice of using a variety of tools and approaches at different stages in the digital product lifecycle to catch accessibility issues early — when it’s easier and cheaper to fix them. ","keywords":""},{"url":"https://gridjs.io/","title":"Grid.js - Advanced JavaScript table plugin","description":"Grid.js is a lightweight JavaScript table plugin that works on all web browsers and devices. 
Grid.js is open-source and it helps you create advanced tables in seconds!","keywords":"grid, gridjs, grid.js, javascript, javascript table, js table, js grid, jquery, react, table, html, npm, node, angular, vue, typescript"},{"url":"https://uncut.wtf/","title":"Welcome to UNCUT","description":"Libre typeface catalogue, focusing on somewhat contemporary type.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/guide-supported-modern-css-pseudo-class-selectors/","title":"A Guide To Newly Supported, Modern CSS Pseudo-Class Selectors — Smashing Magazine","description":"The CSS Working Group Editor’s Draft for [Selectors Level 4](https://drafts.csswg.org/selectors-4/) includes several pseudo-class selectors that already have proposal candidates in most modern browsers. This guide will cover ones that currently have the best support along with examples to demonstrate how you can start using them today!","keywords":""},{"url":"https://amethyst.rs/","title":"Amethyst - The open source, data-driven game engine","description":"Amethyst - The open source, data-driven game engine","keywords":""},{"url":"https://web.dev/efficiently-load-third-party-javascript/","title":"Efficiently load third-party JavaScript","description":"Learn how to avoid the common pitfalls of using third-party scripts to improve load times and user experience. ","keywords":""},{"url":"https://web.dev/cookie-notice-best-practices/","title":"Best practices for cookie notices","description":"Learn about how cookie notices affect performance, performance measurement, and UX. ","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/css-overflow-issues/","title":"Overflow Issues In CSS — Smashing Magazine","description":"In this article, we will explore the causes of overflow issues and how to solve them. 
We will also explore how modern features in the developer tools (DevTools) can make the process of fixing and debugging easier.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/complete-guide-measure-core-web-vitals/","title":"An In-Depth Guide To Measuring Core Web Vitals — Smashing Magazine","description":"How are Core Web Vitals measured? How do you know your fixes have had the desired effect and when will you see the results in Google Search Console? Let’s figure it out.","keywords":""},{"url":"https://css-tricks.com/a-primer-on-the-different-types-of-browser-storage/","title":"A Primer on the Different Types of Browser Storage | CSS-Tricks","description":"In back-end development, storage is a common part of the job. Application data is stored in databases, files in object storage, transient data in caches…","keywords":""},{"url":"https://www.hsablonniere.com/prevent-layout-shifts-with-css-grid-stacks--qcj5jo/","title":"Prevent layout shifts with CSS grid stacks","description":"Detailed explanation with real examples of a CSS grid technique I used to prevent layout shifts when a component state changes.","keywords":""},{"url":"https://icons.modulz.app/","title":"Radix Icons","description":"A crisp set of 15×15 icons designed by the Modulz team.","keywords":""},{"url":"https://css-tricks.com/comparing-various-ways-to-hide-things-in-css/","title":"Comparing Various Ways to Hide Things in CSS | CSS-Tricks","description":"You would think that hiding content with CSS is a straightforward and solved problem, but there are multiple solutions, each one being unique. 
Developers","keywords":""},{"url":"https://www.uxbooth.com/articles/designing-user-friendly-data-tables/","title":"Designing User-Friendly Data Tables | UX Booth","description":"","keywords":""},{"url":"https://ui.dev/esmodules/","title":"ES Modules in Depth - ui.dev","description":"In this post you'll learn all the different ways you can export and import values using ES Modules.","keywords":""},{"url":"https://github.com/socketio/socket.io","title":"socketio/socket.io","description":"Realtime application framework (Node.JS server). Contribute to socketio/socket.io development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/jcubic/tagger","title":"jcubic/tagger","description":"Zero Dependency, Vanilla JavaScript Tag Editor. Contribute to jcubic/tagger development by creating an account on GitHub.","keywords":""},{"url":"https://atomiks.github.io/tippyjs/","title":"Tippy.js - Tooltip, Popover, Dropdown, and Menu Library","description":"The complete tooltip, popover, dropdown, and menu solution for the web","keywords":""},{"url":"https://metascraper.js.org/","title":"metascraper, easily scrape metadata from an article on the web.","description":"easily scrape metadata from an article on the web.","keywords":""},{"url":"https://christianheilmann.com/2021/02/08/sharing-data-between-css-and-javascript-using-custom-properties/","title":" Sharing data between CSS and JavaScript using custom properties | Christian Heilmann","description":"","keywords":""},{"url":"https://www.bram.us/2021/01/29/animating-a-css-gradient-border/","title":"Animating a CSS Gradient Border","description":"Recently, Stephanie Eckles sent out a call to revive the use of CSS border-image. Not to use it with images — which requires a pretty nasty syntax — but to create Gradient Borders in CSS. 🎉 Time to revive an old CSS property! 
When `border-image` was announced, I was off-put b/c the syntax was so … Continue reading \"Animating a CSS Gradient Border\"","keywords":""},{"url":"https://simonhearne.com/2021/layout-shifts-webfonts/","title":"How to avoid layout shifts caused by web fonts","description":"One of the biggest causes of layout shifts for my clients is late-loading web fonts, let's look at how to optimise them!","keywords":"webperf"},{"url":"https://github.com/alesgenova/post-me","title":"alesgenova/post-me","description":"📩 Use web Workers and other Windows through a simple Promise API - alesgenova/post-me","keywords":""},{"url":"https://runjs.app/","title":"RunJS","description":"RunJS is a modern JavaScript and TypeScript playground, displaying instant results as you type and providing access to Node and browser APIs.","keywords":""},{"url":"https://responsibleweb.app/","title":"Responsible Web Applications","description":"HTML and CSS Tips and Tricks for creating applications that are both responsive and accessible out of the box","keywords":""},{"url":"https://inspyr.io/","title":"inspyr - Handpicked set of beautifully designed SVG icons","description":"Handpicked set of beautifully designed SVG icons","keywords":""},{"url":"https://useparcel.com/","title":"Parcel: The Code Editor Built for Email Development","description":"Parcel streamlines your development workflow to help you rapidly code high-quality emails. 
Learn more today!","keywords":""},{"url":"https://alesgenova.github.io/concurrent-wasm-workers/","title":"Running Rust in WebAssembly in a Pool of Concurrent Web Workers in JavaScript","description":"\n\n","keywords":""},{"url":"https://www.jonmellman.com/posts/promise-memoization/","title":"Advanced Promise Patterns: Promise Memoization - Blog by Jon Mellman","description":"Memoizing async methods to simplify caching and avoid common race conditions.","keywords":""},{"url":"https://github.com/isaacHagoel/svelte-dnd-action","title":"isaacHagoel/svelte-dnd-action","description":"An action based drag and drop container for Svelte - isaacHagoel/svelte-dnd-action","keywords":""},{"url":"https://www.smashingmagazine.com/2021/01/front-end-performance-2021-free-pdf-checklist/","title":"Front-End Performance Checklist 2021 — Smashing Magazine","description":"Let’s make 2021... fast! An annual front-end performance checklist, with everything you need to know to create fast experiences on the web today, from metrics to tooling and CSS/JavaScript techniques.","keywords":""},{"url":"https://www.websitecarbon.com/","title":"Website Carbon Calculator | How is your website impacting the planet?","description":"The internet consumes a lot of electricity. 416.2TWh per year to be precise. To give you some perspective, that’s more than the entire United Kingdom.","keywords":""},{"url":"https://stimulus.hotwire.dev/","title":"Stimulus: A modest JavaScript framework for the HTML you already have.","description":"Stimulus is a JavaScript framework with modest ambitions. It doesn’t seek to take over your entire front-end—in fact, it’s not concerned with rendering HTML at all. 
Instead, it’s designed to augment your HTML with just enough behavior to make it shine.","keywords":""},{"url":"https://cableready.stimulusreflex.com/","title":"Welcome","description":"Server-side Ruby making magic happen on the client, in real-time","keywords":""},{"url":"https://github.com/github/view_component","title":"github/view_component","description":"A framework for building reusable, testable & encapsulated view components in Ruby on Rails. - github/view_component","keywords":""},{"url":"https://alistapart.com/article/human-readable-javascript/","title":"Human-Readable JavaScript: A Tale of Two Experts","description":"JavaScript gives us many ways to do things, but deciding which way can be tricky. Laurie Barth gives us a story of two experts who solve this problem in different ways, giving some insight into how…","keywords":""},{"url":"https://docs.stimulusreflex.com/","title":"Welcome","description":"Build reactive applications with the Rails tooling you already know and love","keywords":""},{"url":"https://wormhole.app/","title":"Wormhole - Simple, private file sharing","description":"Wormhole lets you share files with end-to-end encryption and a link that automatically expires.","keywords":""},{"url":"https://moderncss.dev/modern-css-upgrades-to-improve-accessibility/","title":"Modern CSS Upgrades To Improve Accessibility | Modern CSS Solutions","description":"Accessibility is a critical skill for developers doing work at any point in the stack. For front-end tasks, modern CSS provides capabilities we can leverage to make layouts more accessibly inclusive for users of all abilities across any device.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/complete-guide-html-email-templates-tools/","title":"A Complete Guide To HTML Email — Smashing Magazine","description":"A complete guide to HTML email templates, tools, resources and guides. 
Everything you need to know about designing and building HTML Email in 2021.","keywords":""},{"url":"https://css-tricks.com/comparing-the-new-generation-of-build-tools/","title":"Comparing the New Generation of Build Tools | CSS-Tricks","description":"A bunch of new developer tools have landed in the past year and they are biting at the heels of the tools that have dominated front-end development over","keywords":""},{"url":"https://css-tricks.com/html-inputs-and-labels-a-love-story/","title":"HTML Inputs and Labels: A Love Story | CSS-Tricks","description":"Most inputs have something in common — they are happiest with a companion label! And the happiness doesn’t stop there. Forms with proper inputs and labels","keywords":""},{"url":"https://www.sitepoint.com/build-frameworkless-web-app-modern-javascript-web-components/","title":"Build a Web App with Modern JavaScript and Web Components - SitePoint","description":"Web apps don't require a JS framework! Learn how to build a feature-rich, lightweight and dependency-free web app with web components and observables.","keywords":""},{"url":"https://waterfaller.dev/","title":"One-of-a-Kind Core Web Vitals Tool for Technical SEO - Waterfaller","description":"Waterfaller is used by site owners, project managers, and technical SEO professionals to find and fix core web vital issues.","keywords":""},{"url":"https://pnpm.io/","title":"Hello from pnpm | pnpm","description":"Fast, disk space efficient package manager","keywords":""},{"url":"https://www.baseclass.io/guides/string-handling-modern-js","title":"The complete guide to working with strings in modern JavaScript","description":"Everything you need to know about creating, manipulating and comparing strings in JavaScript.","keywords":""},{"url":"https://seanbarry.dev/posts/switch-true-pattern/","title":"Using the Switch(true) Pattern in JavaScript - Seán Barry","description":"The switch true pattern isn't well known but it is incredibly useful. 
It's not a JavaScript specific pattern, but I use it in almost every single project.","keywords":""},{"url":"https://pustelto.com/blog/css-vs-css-in-js-perf/","title":"Real-world CSS vs. CSS-in-JS performance comparison","description":"Personal website of Tomas Pustelnik. A front-end developer with focus on HTML/CSS, React, performance and accessibility. Founder of Qjub.app and tooling addict.","keywords":""},{"url":"https://www.youtube.com/watch?v=B3uH517XbPs","title":"Mage ASCII Game Engine","description":"In this video we're creating an ASCII game engine that runs in the browser! It should be flexible enough that we can build many different kinds of games in i...","keywords":"javascript, js"},{"url":"https://css-tricks.com/memorize-scroll-position-across-page-loads/","title":"Memorize Scroll Position Across Page Loads | CSS-Tricks","description":"Hakim El Hattab tweeted a really nice little UX enhancement for a static site that includes a scrollable sidebar of navigation.","keywords":""},{"url":"https://www.giftegwuenu.com/how-to-switch-logo-in-dark-mode/","title":"How to Switch Logo in Dark Mode","description":"I share how I was able to switch the color of my logo in dark mode using CSS custom properties.","keywords":"HTML,CSS,JavaScript,Vue,React,Accessibility"},{"url":"https://tablericons.com/","title":"Tabler Icons","description":"800+ Customizable free SVG icons MIT licensed - Tabler Icons. 
Configurable stroke, color and size.","keywords":""},{"url":"https://squircley.app/","title":"Squircley | SVG Squircle Maker","description":"Create and export beautiful SVG squircles to use in your designs!","keywords":"Squircle, SVG, Generator, Maker, Design, Icon, Shape, UI, HTML, CSS"},{"url":"https://shareon.js.org/","title":"shareon","description":"Lightweight, stylish and ethical share buttons for popular social networks","keywords":"share buttons, sharing, social networks, share via, share on"},{"url":"https://dmitripavlutin.com/javascript-event-delegation/","title":"A Simple Explanation of Event Delegation in JavaScript","description":"The event delegation is an useful pattern to listen for events on multiple elements using just one event handler.","keywords":""},{"url":"https://medium.com/teads-engineering/generating-uuids-at-scale-on-the-web-2877f529d2a2","title":"Generating UUIDs at scale on the Web","description":"Is it possible to generate a billion unique identifiers per day in the browser? At Teads, we have tried, and the answer is yes - if you ignore bots and bugs. This article describes the experiments we've run and the discoveries we made along the way.","keywords":""},{"url":"https://github.com/fymmot/inclusive-dates","title":"fymmot/inclusive-dates","description":"A human-friendly datepicker. Supports natural language manual input through Chrono.js. Fully accessible with keyboard and screen reader. Contributions welcome! - fymmot/inclusive-dates","keywords":""},{"url":"https://github.com/malinajs/malinajs","title":"malinajs/malinajs","description":"Frontend compiler, inspired by Svelte. 
Contribute to malinajs/malinajs development by creating an account on GitHub.","keywords":""},{"url":"https://nicolodavis.com/blog/typescript-to-rust/","title":"Moving from TypeScript to Rust / WebAssembly | nicolodavis.com","description":"My experience with both languages.","keywords":""},{"url":"https://www.bitnative.com/2020/07/06/four-ways-to-fetch-data-in-react/","title":"Four Ways to Fetch Data in React – Cory House","description":"","keywords":""},{"url":"https://www.samanthaming.com/tidbits/71-how-to-flatten-array-using-array-flat/","title":"Flatten Array using Array.flat() in JavaScript | SamanthaMing.com","description":"It was always complicated to flatten an array in JS. Not anymore! ES2019 introduced a new method that flattens arrays with Array.flat()...","keywords":""},{"url":"https://github.com/lmammino/financial#readme","title":"lmammino/financial","description":"A Zero-dependency TypeScript/JavaScript financial library (based on numpy-financial) for Node.js, Deno and the browser - lmammino/financial","keywords":""},{"url":"https://fredriknoren.github.io/jsplot/","title":"jsplot","description":"jsplot","keywords":""},{"url":"https://stephaniewalter.design/blog/designing-adaptive-components-beyond-responsive-breakpoints/","title":"Designing Adaptive Components, Beyond Responsive Breakpoints by Stéphanie Walter - UX designer & Mobile Expert.","description":"Designing systems of reusable components that adapt to responsive layouts, containers, work with different content states and adapt to user needs, behaviour and context.","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/css-techniques-legibility/","title":"Modern CSS Techniques To Improve Legibility — Smashing Magazine","description":"In this article, we cover how we can improve websites legibility using some modern CSS techniques, great new technologies like variable fonts and putting into practise what we learned from doing scientific 
researches.","keywords":""},{"url":"https://ishadeed.com/article/css-multiple-backgrounds/","title":"Understanding CSS Multiple Backgrounds - Ahmad Shadeed","description":"How to use CSS multiple backgrounds","keywords":"css backgrounds, multiple, background longhand"},{"url":"https://css-tricks.com/lazy-loading-images-in-svelte/","title":"Lazy Loading Images in Svelte | CSS-Tricks","description":"One easy way to improve the speed of a website is to only download images only when they’re needed, which would be when they enter the viewport. This","keywords":""},{"url":"https://whatthefork.is/memoization","title":"What the fork is memoization? ・ Dan’s JavaScript Glossary","description":"Dan’s JavaScript Glossary","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/introduction-stimulusjs/","title":"An Introduction To Stimulus.js — Smashing Magazine","description":"In this article, Mike Rogers will introduce you to Stimulus, a modest JavaScript framework that complements your existing HTML. 
By the end, you’ll have an understanding of the premise of Stimulus and why it’s a useful tool to have in your backpack.","keywords":""},{"url":"https://ishadeed.com/article/pixel-perfection/","title":"The State Of Pixel Perfection - Ahmad Shadeed","description":"A walkthrough of the term pixel perfection and if it's still relevant today or not.","keywords":"pixel perfection, css, look and feel"},{"url":"https://www.smashingmagazine.com/2020/07/design-wireframes-accessible-html-css/","title":"Translating Design Wireframes Into Accessible HTML/CSS — Smashing Magazine","description":"In this article, Harris Schneiderman walks you through the process of analyzing a wireframe and making coding decisions to optimize for accessibility.","keywords":""},{"url":"https://webcomponents.dev/blog/all-the-ways-to-make-a-web-component/","title":"All the Ways to Make a Web Component - May 2021 Update","description":"Compare coding style, bundle size and performance of 55 different ways to make a Web Component.","keywords":""},{"url":"https://m.signalvnoise.com/how-we-achieve-simple-design-for-basecamp-and-hey/","title":"How we achieve “simple design” for Basecamp and HEY","description":"Yesterday I got an email asking how we achieve simple designs for Basecamp and HEY, so I hastily tweeted a screenshot of my answer, and a lot of people responded to it. A few folks pointed out that…","keywords":""},{"url":"https://propjockey.github.io/css-media-vars/","title":"css-media-vars from PropJockey","description":"","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/tiny-desktop-apps-tauri-vuejs/","title":"Creating Tiny Desktop Apps With Tauri And Vue.js — Smashing Magazine","description":"Tauri is a toolchain for creating small, fast, and secure desktop apps from your existing HTML, CSS, and JavaScript. 
In this article, Kelvin explains how Tauri plays well with the progressive framework Vue.js by integrating both technologies in bundling an example web app called **nota** as a native application.","keywords":""},{"url":"https://nicolodavis.com/blog/typescript-to-rust/","title":"Moving from TypeScript to Rust / WebAssembly | nicolodavis.com","description":"My experience with both languages.","keywords":""},{"url":"https://css-tricks.com/building-serverless-graphql-api-in-node-with-express-and-netlify/","title":"Building Serverless GraphQL API in Node with Express and Netlify | CSS-Tricks","description":"I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install","keywords":""},{"url":"https://blog.tailwindcss.com/building-the-tailwind-blog","title":"Building the Tailwind Blog with Next.js – Tailwind CSS","description":"One of the things we believe as a team is that everything we make should be sealed with a blog post. Forcing ourselves to write up a short announcement post for every project we work on acts as a built-in quality check, making sure that we never call a project \"done\" until we feel comfortable telling the world it's out there. 
The problem was that up until today, we didn't actually have anywhere to publish those posts!","keywords":""},{"url":"https://riccardoscalco.it/textures/","title":"Textures.js","description":"A JavaScript Library for creating SVG patterns","keywords":"svg, patterns, javascript, d3, textures"},{"url":"https://www.zachleat.com/web/speedlify/","title":"Use Speedlify to Continuously Measure Site Performance—zachleat.com","description":"A post by Zach Leatherman (zachleat)","keywords":""},{"url":"https://keyframes.app/","title":"Keyframes.app | CSS Toolbox","description":"Keyframes helps you write better CSS with a suite of tools to create CSS @Keyframe animations, box shadows, colors, & more","keywords":"Keyframes, animation, css, css3, transform, translate, html, javascript, web design, web developemtn, web animations, editor, generator"},{"url":"https://www.taniarascia.com/understanding-template-literals/","title":"Understanding Template Literals in JavaScript","description":"This article was originally written for DigitalOcean. Introduction The 2015 edition of the ECMAScript specification (ES6) added template…","keywords":""},{"url":"https://medium.com/@abulka/todomvc-implemented-using-a-game-architecture-ecs-88bb86ea5e98","title":"TodoMVC implemented using a game architecture — ECS.","description":"It turns out that the answer is yes! Whilst ECS is most commonly used in building games, it can also be used for building a traditional web “form” style application like TodoMVC. However you will…","keywords":""},{"url":"https://github.com/jamesroutley/24a2","title":"jamesroutley/24a2","description":"🏵 An ultra-minimalist game engine. 
Contribute to jamesroutley/24a2 development by creating an account on GitHub.","keywords":""},{"url":"https://www.w3.org/TR/wai-aria-practices-1.1/examples/combobox/aria1.1pattern/listbox-combo.html","title":"ARIA 1.1 Combobox with Listbox Popup Examples | WAI-ARIA Authoring Practices 1.1","description":"","keywords":""},{"url":"https://dev.to/mpodlasin/3-most-common-mistakes-when-using-promises-in-javascript-oab","title":"3 most common mistakes when using Promises in JavaScript","description":"Promises rule JavaScript. Even nowadays, with introduction of async/await, they are still an obligato... Tagged with javascript.","keywords":"javascript, software, coding, development, engineering, inclusive, community"},{"url":"https://daybrush.com/moveable/","title":"Moveable is Draggable! Resizable! Scalable! Rotatable! Warpable! Pinchable!","description":"Moveable is Draggable! Resizable! Scalable! Rotatable! Warpable! Pinchable!","keywords":""}]
\ No newline at end of file
+[{"url":"https://threads.js.org/","title":"Web worker meets worker threads - threads.js","description":"Make web workers & worker threads as simple as a function call – worker library for node.js, web and electron. JavaScript/TypeScript, supports webpack.\n","keywords":""},{"url":"https://k6.io/","title":"Load testing for engineering teams | k6","description":"k6 is an open-source load testing tool and cloud service providing the best developer experience for API performance testing.","keywords":""},{"url":"http://www.heppnetz.de/projects/goodrelations/","title":"GoodRelations: The Professional Web Vocabulary for E-Commerce","description":"GoodRelations is the most powerful language for product, price, and company data that can (1) be embedded into existing static and dynamic Web pages and that (2) can be processed by other computers.
This increases the visibility of your products and services in the latest generation of search engines, recommender systems, and other novel applications.","keywords":"goodrelations, rdfa, seo, linked data, searchmonkey, yahoo, sem, rdf, owl, rdf-s, e-commerce, semantic web"},{"url":"https://inclusive-components.design/","title":"Inclusive Components","description":"A blog trying to be a pattern library. All about designing inclusive web interfaces, piece by piece.","keywords":""},{"url":"https://uniformcss.com/docs/overview/","title":"Uniform CSS - Overview","description":"Quick dive into Uniform CSS and its features.","keywords":""},{"url":"https://michaelscodingspot.com/javascript-performance-apis/","title":"New browser APIs to detect JavaScript performance problems in production","description":"Talking about APIs for performance metrics in the browser. Find out how much problems your JS code really causes in production.","keywords":""},{"url":"http://docs.mathjax.org/en/latest/","title":"MathJax Documentation — MathJax 3.2 documentation","description":"","keywords":""},{"url":"https://undici.nodejs.org/","title":"Node.js Undici","description":"A HTTP/1.1 client, written from scratch for Node.js.","keywords":""},{"url":"https://upstash.com/","title":"Upstash: Serverless Database for Redis","description":"Designed for the serverless with per-request pricing and Redis API on durable storage.","keywords":""},{"url":"https://traduora.com/","title":"traduora: translation management platform for teams.","description":"traduora | translation management platform for teams","keywords":"localization tool, software localization, internationalization, android localization, ios localization, language localization, app translate, app internationalization, easy translation"},{"url":"http://mikemcl.github.io/big.js/","title":"big.js API","description":"","keywords":""},{"url":"https://svelte-motion.gradientdescent.de/","title":"Svelte-Motion","description":"An animation library 
for Svelte apps based on Framer Motion.","keywords":""},{"url":"https://www.npmjs.com/package/arquero","title":"arquero","description":"Query processing and transformation of array-backed data tables.","keywords":"data,query,database,table,dataframe,transform,arrays"},{"url":"https://arrow.apache.org/docs/js/","title":"Apache Arrow","description":"Documentation for Apache Arrow","keywords":""},{"url":"https://github.com/uwdata/falcon","title":"uwdata/falcon","description":"Brushing and linking for big data. Contribute to uwdata/falcon development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/vercel/tracing-js","title":"vercel/tracing-js","description":"An implementation of Opentracing API for honeycomb.io - vercel/tracing-js","keywords":""},{"url":"https://getstarted.sh/","title":null,"description":"","keywords":""},{"url":"https://mdsvex.pngwn.io/","title":"mdsvex - svelte in markdown","description":"Combine svelte and markdown in the same file. Live your dreams!","keywords":""},{"url":"https://github.com/Pimm/mapsort","title":"Pimm/mapsort","description":"Performant sorting for complex input. Contribute to Pimm/mapsort development by creating an account on GitHub.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/bake-layers-accessibility-testing-process/","title":"How To Bake Layers Of Accessibility Testing Into Your Process — Smashing Magazine","description":"Accessibility experts Kate Kalcevich and Mike Gifford introduce readers to \"layered accessibility testing”, a practice of using a variety of tools and approaches at different stages in the digital product lifecycle to catch accessibility issues early — when it’s easier and cheaper to fix them. ","keywords":""},{"url":"https://gridjs.io/","title":"Grid.js - Advanced JavaScript table plugin","description":"Grid.js is a lightweight JavaScript table plugin that works on all web browsers and devices. 
Grid.js is open-source and it helps you create advanced tables in seconds!","keywords":"grid, gridjs, grid.js, javascript, javascript table, js table, js grid, jquery, react, table, html, npm, node, angular, vue, typescript"},{"url":"https://uncut.wtf/","title":"Welcome to UNCUT","description":"Libre typeface catalogue, focusing on somewhat contemporary type.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/guide-supported-modern-css-pseudo-class-selectors/","title":"A Guide To Newly Supported, Modern CSS Pseudo-Class Selectors — Smashing Magazine","description":"The CSS Working Group Editor’s Draft for [Selectors Level 4](https://drafts.csswg.org/selectors-4/) includes several pseudo-class selectors that already have proposal candidates in most modern browsers. This guide will cover ones that currently have the best support along with examples to demonstrate how you can start using them today!","keywords":""},{"url":"https://amethyst.rs/","title":"Amethyst - The open source, data-driven game engine","description":"Amethyst - The open source, data-driven game engine","keywords":""},{"url":"https://codeadrian.hashnode.dev/the-best-approach-to-lazy-load-images-for-maximum-performance","title":"The best approach to lazy load images for maximum performance","description":"Image lazy loading is one of the more popular approaches of optimizing websites due to the relatively easy implementation and large performance gain. With lazy loading we load images asynchronously, meaning that we load images only when they appear i...","keywords":""},{"url":"https://web.dev/efficiently-load-third-party-javascript/","title":"Efficiently load third-party JavaScript","description":"Learn how to avoid the common pitfalls of using third-party scripts to improve load times and user experience. 
","keywords":""},{"url":"https://web.dev/cookie-notice-best-practices/","title":"Best practices for cookie notices","description":"Learn about how cookie notices affect performance, performance measurement, and UX. ","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/css-overflow-issues/","title":"Overflow Issues In CSS — Smashing Magazine","description":"In this article, we will explore the causes of overflow issues and how to solve them. We will also explore how modern features in the developer tools (DevTools) can make the process of fixing and debugging easier.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/complete-guide-measure-core-web-vitals/","title":"An In-Depth Guide To Measuring Core Web Vitals — Smashing Magazine","description":"How are Core Web Vitals measured? How do you know your fixes have had the desired effect and when will you see the results in Google Search Console? Let’s figure it out.","keywords":""},{"url":"https://medium.muz.li/responsive-grid-design-ultimate-guide-7aa41ca7892?gi=22ad469d9597","title":"Responsive Grid Design: Ultimate Guide","description":"Responsive grid helps to maintain consistency and make faster design decisions. Kickstart your responsive grid design process with this quick how-to guide.","keywords":""},{"url":"https://css-tricks.com/a-primer-on-the-different-types-of-browser-storage/","title":"A Primer on the Different Types of Browser Storage","description":"In back-end development, storage is a common part of the job. 
Application data is stored in databases, files in object storage, transient data in caches…","keywords":""},{"url":"https://www.hsablonniere.com/prevent-layout-shifts-with-css-grid-stacks--qcj5jo/","title":"Prevent layout shifts with CSS grid stacks","description":"Detailed explanation with real examples of a CSS grid technique I used to prevent layout shifts when a component state changes.","keywords":""},{"url":"https://icons.modulz.app/","title":"Radix Icons","description":"A crisp set of 15×15 icons designed by the Modulz team.","keywords":""},{"url":"https://css-tricks.com/comparing-various-ways-to-hide-things-in-css/","title":"Comparing Various Ways to Hide Things in CSS | CSS-Tricks","description":"You would think that hiding content with CSS is a straightforward and solved problem, but there are multiple solutions, each one being unique. Developers","keywords":""},{"url":"https://www.uxbooth.com/articles/designing-user-friendly-data-tables/","title":"Designing User-Friendly Data Tables | UX Booth","description":"","keywords":""},{"url":"https://ui.dev/esmodules/","title":"ES Modules in Depth - ui.dev","description":"In this post you'll learn all the different ways you can export and import values using ES Modules.","keywords":""},{"url":"https://github.com/socketio/socket.io","title":"socketio/socket.io","description":"Realtime application framework (Node.JS server). Contribute to socketio/socket.io development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/jcubic/tagger","title":"jcubic/tagger","description":"Zero Dependency, Vanilla JavaScript Tag Editor. 
Contribute to jcubic/tagger development by creating an account on GitHub.","keywords":""},{"url":"https://atomiks.github.io/tippyjs/","title":"Tippy.js - Tooltip, Popover, Dropdown, and Menu Library","description":"The complete tooltip, popover, dropdown, and menu solution for the web","keywords":""},{"url":"https://www.colyseus.io/","title":"Multiplayer server | Colyseus: Simple & Fast Multiplayer Game Creation","description":"Colyseus is a multiplayer framework and cloud solution that unlocks the ability to create multiplayer games for millions of game creators worldwide. Open-source, built on Node.js and is the fastest way to create an authoritative multiplayer server. Try Colyseus Arena to host a free multiplayer server in the cloud.","keywords":""},{"url":"https://metascraper.js.org/","title":"metascraper, easily scrape metadata from an article on the web.","description":"easily scrape metadata from an article on the web.","keywords":""},{"url":"https://christianheilmann.com/2021/02/08/sharing-data-between-css-and-javascript-using-custom-properties/","title":" Sharing data between CSS and JavaScript using custom properties | Christian Heilmann","description":"","keywords":""},{"url":"https://www.bram.us/2021/01/29/animating-a-css-gradient-border/","title":"Animating a CSS Gradient Border","description":"Recently, Stephanie Eckles sent out a call to revive the use of CSS border-image. Not to use it with images — which requires a pretty nasty syntax — but to create Gradient Borders in CSS. 🎉 Time to revive an old CSS property! 
When `border-image` was announced, I was off-put b/c the syntax was so … Continue reading \"Animating a CSS Gradient Border\"","keywords":""},{"url":"https://simonhearne.com/2021/layout-shifts-webfonts/","title":"How to avoid layout shifts caused by web fonts","description":"One of the biggest causes of layout shifts for my clients is late-loading web fonts, let's look at how to optimise them!","keywords":"webperf"},{"url":"https://github.com/alesgenova/post-me","title":"alesgenova/post-me","description":"📩 Use web Workers and other Windows through a simple Promise API - alesgenova/post-me","keywords":""},{"url":"https://runjs.app/","title":"RunJS","description":"RunJS is a modern JavaScript and TypeScript playground, displaying instant results as you type and providing access to Node and browser APIs.","keywords":""},{"url":"https://responsibleweb.app/","title":"Responsible Web Applications","description":"HTML and CSS Tips and Tricks for creating applications that are both responsive and accessible out of the box","keywords":""},{"url":"https://inspyr.io/","title":"inspyr - Handpicked set of beautifully designed SVG icons","description":"Handpicked set of beautifully designed SVG icons","keywords":""},{"url":"https://useparcel.com/","title":"Parcel: The Code Editor Built for Email Development","description":"Parcel streamlines your development workflow to help you rapidly code high-quality emails. 
Learn more today!","keywords":""},{"url":"https://alesgenova.github.io/concurrent-wasm-workers/","title":"Running Rust in WebAssembly in a Pool of Concurrent Web Workers in JavaScript","description":"\n\n","keywords":""},{"url":"https://www.jonmellman.com/posts/promise-memoization/","title":"Advanced Promise Patterns: Promise Memoization - Blog by Jon Mellman","description":"Memoizing async methods to simplify caching and avoid common race conditions.","keywords":""},{"url":"https://github.com/isaacHagoel/svelte-dnd-action","title":"isaacHagoel/svelte-dnd-action","description":"An action based drag and drop container for Svelte - isaacHagoel/svelte-dnd-action","keywords":""},{"url":"https://www.smashingmagazine.com/2021/01/front-end-performance-2021-free-pdf-checklist/","title":"Front-End Performance Checklist 2021 — Smashing Magazine","description":"Let’s make 2021... fast! An annual front-end performance checklist, with everything you need to know to create fast experiences on the web today, from metrics to tooling and CSS/JavaScript techniques.","keywords":""},{"url":"https://www.websitecarbon.com/","title":"Website Carbon Calculator | How is your website impacting the planet?","description":"The internet consumes a lot of electricity. 416.2TWh per year to be precise. To give you some perspective, that’s more than the entire United Kingdom.","keywords":""},{"url":"https://stimulus.hotwire.dev/","title":"Stimulus: A modest JavaScript framework for the HTML you already have.","description":"Stimulus is a JavaScript framework with modest ambitions. It doesn’t seek to take over your entire front-end—in fact, it’s not concerned with rendering HTML at all. 
Instead, it’s designed to augment your HTML with just enough behavior to make it shine.","keywords":""},{"url":"https://cableready.stimulusreflex.com/","title":"Welcome","description":"Server-side Ruby making magic happen on the client, in real-time","keywords":""},{"url":"https://github.com/github/view_component","title":"github/view_component","description":"A framework for building reusable, testable & encapsulated view components in Ruby on Rails. - github/view_component","keywords":""},{"url":"https://alistapart.com/article/human-readable-javascript/","title":"Human-Readable JavaScript: A Tale of Two Experts","description":"JavaScript gives us many ways to do things, but deciding which way can be tricky. Laurie Barth gives us a story of two experts who solve this problem in different ways, giving some insight into how…","keywords":""},{"url":"https://docs.stimulusreflex.com/","title":"Welcome","description":"Build reactive applications with the Rails tooling you already know and love","keywords":""},{"url":"https://wormhole.app/","title":"Wormhole - Simple, private file sharing","description":"Wormhole lets you share files with end-to-end encryption and a link that automatically expires.","keywords":""},{"url":"https://moderncss.dev/modern-css-upgrades-to-improve-accessibility/","title":"Modern CSS Upgrades To Improve Accessibility | Modern CSS Solutions","description":"Accessibility is a critical skill for developers doing work at any point in the stack. For front-end tasks, modern CSS provides capabilities we can leverage to make layouts more accessibly inclusive for users of all abilities across any device.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/04/complete-guide-html-email-templates-tools/","title":"A Complete Guide To HTML Email — Smashing Magazine","description":"A complete guide to HTML email templates, tools, resources and guides. 
Everything you need to know about designing and building HTML Email in 2021.","keywords":""},{"url":"https://css-tricks.com/comparing-the-new-generation-of-build-tools/","title":"Comparing the New Generation of Build Tools","description":"A bunch of new developer tools have landed in the past year and they are biting at the heels of the tools that have dominated front-end development over","keywords":""},{"url":"https://css-tricks.com/html-inputs-and-labels-a-love-story/","title":"HTML Inputs and Labels: A Love Story | CSS-Tricks","description":"Most inputs have something in common — they are happiest with a companion label! And the happiness doesn’t stop there. Forms with proper inputs and labels","keywords":""},{"url":"https://www.sitepoint.com/build-frameworkless-web-app-modern-javascript-web-components/","title":"Build a Web App with Modern JavaScript and Web Components - SitePoint","description":"Web apps don't require a JS framework! Learn how to build a feature-rich, lightweight and dependency-free web app with web components and observables.","keywords":""},{"url":"https://waterfaller.dev/","title":"One-of-a-Kind Core Web Vitals Tool for Technical SEO - Waterfaller","description":"Waterfaller is used by site owners, project managers, and technical SEO professionals to find and fix core web vital issues.","keywords":""},{"url":"https://pnpm.io/","title":"Hello from pnpm | pnpm","description":"Fast, disk space efficient package manager","keywords":""},{"url":"https://www.baseclass.io/guides/string-handling-modern-js","title":"The complete guide to working with strings in modern JavaScript","description":"Everything you need to know about creating, manipulating and comparing strings in JavaScript.","keywords":""},{"url":"https://seanbarry.dev/posts/switch-true-pattern/","title":"Using the Switch(true) Pattern in JavaScript - Seán Barry","description":"The switch true pattern isn't well known but it is incredibly useful. 
It's not a JavaScript specific pattern, but I use it in almost every single project.","keywords":""},{"url":"https://pustelto.com/blog/css-vs-css-in-js-perf/","title":"Real-world CSS vs. CSS-in-JS performance comparison","description":"Personal website of Tomas Pustelnik. A front-end developer with focus on HTML/CSS, React, performance and accessibility. Founder of Qjub.app and tooling addict.","keywords":""},{"url":"https://www.youtube.com/watch?v=B3uH517XbPs","title":"Mage ASCII Game Engine","description":"In this video we're creating an ASCII game engine that runs in the browser! It should be flexible enough that we can build many different kinds of games in i...","keywords":"javascript, js"},{"url":"https://blog.openreplay.com/hyperapp-is-it-the-lightweight-react-killer/","title":"Hyperapp – Is It the Lightweight 'React Killer'?","description":"React is great but it's not perfect and Hyperapp takes advantage of that, find out how this lesser known framework is better than React","keywords":""},{"url":"https://css-tricks.com/memorize-scroll-position-across-page-loads/","title":"Memorize Scroll Position Across Page Loads","description":"Hakim El Hattab tweeted a really nice little UX enhancement for a static site that includes a scrollable sidebar of navigation.","keywords":""},{"url":"https://www.giftegwuenu.com/how-to-switch-logo-in-dark-mode/","title":"How to Switch Logo in Dark Mode","description":"I share how I was able to switch the color of my logo in dark mode using CSS custom properties.","keywords":"HTML,CSS,JavaScript,Vue,React,Accessibility"},{"url":"https://tablericons.com/","title":"Tabler Icons","description":"800+ Customizable free SVG icons MIT licensed - Tabler Icons. 
Configurable stroke, color and size.","keywords":""},{"url":"https://squircley.app/","title":"Squircley | SVG Squircle Maker","description":"Create and export beautiful SVG squircles to use in your designs!","keywords":"Squircle, SVG, Generator, Maker, Design, Icon, Shape, UI, HTML, CSS"},{"url":"https://shareon.js.org/","title":"shareon","description":"Lightweight, stylish and ethical share buttons for popular social networks","keywords":"share buttons, sharing, social networks, share via, share on"},{"url":"https://aboutmonica.com/blog/how-to-create-a-github-profile-readme/","title":"How To Create A GitHub Profile README","description":"This article walksthrough how to access GitHub's new profile level README feature","keywords":""},{"url":"https://dmitripavlutin.com/javascript-event-delegation/","title":"A Simple Explanation of Event Delegation in JavaScript","description":"The event delegation is an useful pattern to listen for events on multiple elements using just one event handler.","keywords":""},{"url":"https://medium.com/teads-engineering/generating-uuids-at-scale-on-the-web-2877f529d2a2","title":"Generating UUIDs at scale on the Web","description":"Is it possible to generate a billion unique identifiers per day in the browser? At Teads, we have tried, and the answer is yes - if you ignore bots and bugs. This article describes the experiments we've run and the discoveries we made along the way.","keywords":""},{"url":"https://github.com/fymmot/inclusive-dates","title":"fymmot/inclusive-dates","description":"A human-friendly datepicker. Supports natural language manual input through Chrono.js. Fully accessible with keyboard and screen reader. Contributions welcome! - fymmot/inclusive-dates","keywords":""},{"url":"https://github.com/malinajs/malinajs","title":"malinajs/malinajs","description":"Frontend compiler, inspired by Svelte. 
Contribute to malinajs/malinajs development by creating an account on GitHub.","keywords":""},{"url":"https://nicolodavis.com/blog/typescript-to-rust/","title":"Moving from TypeScript to Rust / WebAssembly | nicolodavis.com","description":"My experience with both languages.","keywords":""},{"url":"https://www.bitnative.com/2020/07/06/four-ways-to-fetch-data-in-react/","title":"Four Ways to Fetch Data in React – Cory House","description":"","keywords":""},{"url":"https://www.samanthaming.com/tidbits/71-how-to-flatten-array-using-array-flat/","title":"Flatten Array using Array.flat() in JavaScript | SamanthaMing.com","description":"It was always complicated to flatten an array in JS. Not anymore! ES2019 introduced a new method that flattens arrays with Array.flat()...","keywords":""},{"url":"https://github.com/lmammino/financial#readme","title":"lmammino/financial","description":"A Zero-dependency TypeScript/JavaScript financial library (based on numpy-financial) for Node.js, Deno and the browser - lmammino/financial","keywords":""},{"url":"https://fredriknoren.github.io/jsplot/","title":"jsplot","description":"jsplot","keywords":""},{"url":"https://stephaniewalter.design/blog/designing-adaptive-components-beyond-responsive-breakpoints/","title":"Designing Adaptive Components, Beyond Responsive Breakpoints by Stéphanie Walter - UX designer & Mobile Expert.","description":"Designing systems of reusable components that adapt to responsive layouts, containers, work with different content states and adapt to user needs, behaviour and context.","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/css-techniques-legibility/","title":"Modern CSS Techniques To Improve Legibility — Smashing Magazine","description":"In this article, we cover how we can improve websites legibility using some modern CSS techniques, great new technologies like variable fonts and putting into practise what we learned from doing scientific 
researches.","keywords":""},{"url":"https://ishadeed.com/article/css-multiple-backgrounds/","title":"Understanding CSS Multiple Backgrounds - Ahmad Shadeed","description":"How to use CSS multiple backgrounds","keywords":"css backgrounds, multiple, background longhand"},{"url":"https://elijahmanor.com/blog/format-js-dates-and-times","title":"Natively Format JavaScript Dates and Times","description":"Modern browsers provide advanced JavaScript date/time formatting with locale and time zone support","keywords":""},{"url":"https://css-tricks.com/lazy-loading-images-in-svelte/","title":"Lazy Loading Images in Svelte","description":"One easy way to improve the speed of a website is to only download images only when they’re needed, which would be when they enter the viewport. This","keywords":""},{"url":"https://whatthefork.is/memoization","title":"What the fork is memoization? ・ Dan’s JavaScript Glossary","description":"Dan’s JavaScript Glossary","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/introduction-stimulusjs/","title":"An Introduction To Stimulus.js — Smashing Magazine","description":"In this article, Mike Rogers will introduce you to Stimulus, a modest JavaScript framework that complements your existing HTML. 
By the end, you’ll have an understanding of the premise of Stimulus and why it’s a useful tool to have in your backpack.","keywords":""},{"url":"https://ishadeed.com/article/pixel-perfection/","title":"The State Of Pixel Perfection - Ahmad Shadeed","description":"A walkthrough of the term pixel perfection and if it's still relevant today or not.","keywords":"pixel perfection, css, look and feel"},{"url":"https://www.smashingmagazine.com/2020/07/design-wireframes-accessible-html-css/","title":"Translating Design Wireframes Into Accessible HTML/CSS — Smashing Magazine","description":"In this article, Harris Schneiderman walks you through the process of analyzing a wireframe and making coding decisions to optimize for accessibility.","keywords":""},{"url":"https://webcomponents.dev/blog/all-the-ways-to-make-a-web-component/","title":"All the Ways to Make a Web Component - May 2021 Update","description":"Compare coding style, bundle size and performance of 55 different ways to make a Web Component.","keywords":""},{"url":"https://adamsilver.io/blog/the-trouble-with-mailto-email-links-and-what-to-do-instead/","title":"The trouble with mailto email links and what to do instead – Adam Silver – Designer, London, UK. ","description":"Mailto links are everywhere and yet browsers and operating systems don’t make them easy to use. Learn why that is and what we did about it when we launched Frankly, a joint venture helping teams create clear, accessible and user-centred digital products.","keywords":""},{"url":"https://m.signalvnoise.com/how-we-achieve-simple-design-for-basecamp-and-hey/","title":"How we achieve “simple design” for Basecamp and HEY","description":"Yesterday I got an email asking how we achieve simple designs for Basecamp and HEY, so I hastily tweeted a screenshot of my answer, and a lot of people responded to it. 
A few folks pointed out that…","keywords":""},{"url":"https://propjockey.github.io/css-media-vars/","title":"css-media-vars from PropJockey","description":"","keywords":""},{"url":"https://betterprogramming.pub/how-i-built-a-rest-api-using-google-sheets-5bbf356b01f0?gi=d772c3839c32","title":"How I Built a REST API Using Google Sheets","description":"Google Sheets is where I keep track of my push-ups. I want to be able to visualize my push-ups, and I decided to build a data visualization chart using my own data. Without further ado, I swiftly…","keywords":""},{"url":"https://www.smashingmagazine.com/2020/07/tiny-desktop-apps-tauri-vuejs/","title":"Creating Tiny Desktop Apps With Tauri And Vue.js — Smashing Magazine","description":"Tauri is a toolchain for creating small, fast, and secure desktop apps from your existing HTML, CSS, and JavaScript. In this article, Kelvin explains how Tauri plays well with the progressive framework Vue.js by integrating both technologies in bundling an example web app called **nota** as a native application.","keywords":""},{"url":"https://nicolodavis.com/blog/typescript-to-rust/","title":"Moving from TypeScript to Rust / WebAssembly | nicolodavis.com","description":"My experience with both languages.","keywords":""},{"url":"https://css-tricks.com/building-serverless-graphql-api-in-node-with-express-and-netlify/","title":"Building Serverless GraphQL API in Node with Express and Netlify","description":"I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install","keywords":""},{"url":"https://blog.tailwindcss.com/building-the-tailwind-blog","title":"Building the Tailwind Blog with Next.js – Tailwind CSS","description":"One of the things we believe as a team is that everything we make should be sealed with a blog post. 
Forcing ourselves to write up a short announcement post for every project we work on acts as a built-in quality check, making sure that we never call a project \"done\" until we feel comfortable telling the world it's out there. The problem was that up until today, we didn't actually have anywhere to publish those posts!","keywords":""},{"url":"https://riccardoscalco.it/textures/","title":"Textures.js","description":"A JavaScript Library for creating SVG patterns","keywords":"svg, patterns, javascript, d3, textures"},{"url":"https://www.zachleat.com/web/speedlify/","title":"Use Speedlify to Continuously Measure Site Performance—zachleat.com","description":"A post by Zach Leatherman (zachleat)","keywords":""},{"url":"https://keyframes.app/","title":"Keyframes.app | CSS Toolbox","description":"Keyframes helps you write better CSS with a suite of tools to create CSS @Keyframe animations, box shadows, colors, & more","keywords":"Keyframes, animation, css, css3, transform, translate, html, javascript, web design, web developemtn, web animations, editor, generator"},{"url":"https://www.taniarascia.com/understanding-template-literals/","title":"Understanding Template Literals in JavaScript","description":"This article was originally written for DigitalOcean. Introduction The 2015 edition of the ECMAScript specification (ES6) added template…","keywords":""},{"url":"https://codegolf.stackexchange.com/questions/2682/tips-for-golfing-in-javascript","title":"Tips for golfing in JavaScript","description":"What general tips do you have for golfing in JavaScript? I'm looking for ideas that can be applied to code golf problems in general that are at least somewhat specific to JavaScript (e.g. \"remove ","keywords":""},{"url":"https://medium.com/@abulka/todomvc-implemented-using-a-game-architecture-ecs-88bb86ea5e98","title":"TodoMVC implemented using a game architecture — ECS.","description":"It turns out that the answer is yes! 
Whilst ECS is most commonly used in building games, it can also be used for building a traditional web “form” style application like TodoMVC. However you will…","keywords":""},{"url":"https://github.com/jamesroutley/24a2","title":"jamesroutley/24a2","description":"🏵 An ultra-minimalist game engine. Contribute to jamesroutley/24a2 development by creating an account on GitHub.","keywords":""},{"url":"https://www.w3.org/TR/wai-aria-practices-1.1/examples/combobox/aria1.1pattern/listbox-combo.html","title":"ARIA 1.1 Combobox with Listbox Popup Examples | WAI-ARIA Authoring Practices 1.1","description":"","keywords":""},{"url":"https://dev.to/mpodlasin/3-most-common-mistakes-when-using-promises-in-javascript-oab","title":"3 most common mistakes when using Promises in JavaScript","description":"Promises rule JavaScript. Even nowadays, with introduction of async/await, they are still an obligato... Tagged with javascript.","keywords":"javascript, software, coding, development, engineering, inclusive, community"},{"url":"https://daybrush.com/moveable/","title":"Moveable is Draggable! Resizable! Scalable! Rotatable! Warpable! Pinchable!","description":"Moveable is Draggable! Resizable! Scalable! Rotatable! Warpable! Pinchable!","keywords":""},{"url":"https://teenyicons.com/","title":"Teenyicons — Tiny minimal 1px icons","description":"An elegant icon set by Anja van Staden with more than a thousand icons.","keywords":""},{"url":"https://medium.com/nmc-techblog/introducing-the-async-cookie-store-api-89cbecf401f","title":"Introducing: The Async Cookie Store API","description":"Are you sick and tired of weird ways to get cookies fromdocument.cookie ? Hate it that you don’t know whether the cookie you set was actually saved or not? 
Introducing: Cookie Store API, available on…","keywords":""},{"url":"https://css-tricks.com/how-to-create-a-realistic-motion-blur-with-css-transitions/","title":"How to Create a Realistic Motion Blur with CSS Transitions","description":"Before we delve into making a realistic motion blur in CSS, it’s worth doing a quick dive into what motion blur is, so we can have a better idea of what","keywords":""},{"url":"https://blog.sapegin.me/all/accessibility-testing/","title":"The most useful accessibility testing tools and techniques","description":"Shipping accessible features is as important for a frontend developer as shipping features without bugs, learn about tools and techniques that will help you achieve that.","keywords":""},{"url":"https://www.thegoodlineheight.com/","title":"The good line-height","description":"","keywords":""},{"url":"https://secretgeek.github.io/html_wysiwyg/html.html","title":"This page is a truly naked, brutalist html quine.","description":"","keywords":""},{"url":"https://lea.verou.me/2020/10/the-var-space-hack-to-toggle-multiple-values-with-one-custom-property/","title":"The -​-var: ; hack to toggle multiple values with one custom property – Lea Verou","description":"","keywords":""},{"url":"https://web.dev/file-system-access/","title":"The File System Access API: simplifying access to local files","description":"The File System Access API enables developers to build powerful web apps that interact with files on the user's local device, like IDEs, photo and video editors, text editors, and more. 
After a user grants a web app access, this API allows them to read or save changes directly to files and folders on the user's device.","keywords":""},{"url":"https://www.smashingmagazine.com/2020/10/using-webxr-with-babylonjs/","title":"Using WebXR With Babylon.js — Smashing Magazine","description":"In this overview of WebXR technologies and the Babylon.js framework, we’ll inspect the underpinnings of WebXR and the most important aspects of the WebXR Device API before turning our attention to Babylon.js.","keywords":""},{"url":"https://github.com/oguzeroglu/Ego","title":"oguzeroglu/Ego","description":"A lightweight decision making library for game AI. - oguzeroglu/Ego","keywords":""},{"url":"https://polypane.app/css-3d-transform-examples/","title":"Beautiful CSS 3D Transform Perspective Examples in 2020 | Polypane, The Browser for Developers and Designers","description":"Beautiful CSS 3D transform examples using a single div that you can copy with one click!","keywords":""},{"url":"https://www.goodweb.design/tags/tiered","title":"Good Web Design","description":"Reference the best landing page design patterns","keywords":""},{"url":"https://dev.to/reedbarger/the-react-cheatsheet-for-2020-real-world-examples-4hgg","title":"The React Cheatsheet for 2020 📄‬ (+ Real-World Examples)","description":"I've put together for you an entire visual cheatsheet of all of the concepts and skills you need to m... Tagged with react, javascript, beginners, career.","keywords":"react, javascript, beginners, career, software, coding, development, engineering, inclusive, community"},{"url":"https://elderguide.com/tech/elderjs/","title":"Elder.js: A Svelte Framework and Static Site Generator","description":"Elder.js is an opinionated Svelte framework and static site generator used for building blazing fast, user friendly websites.","keywords":""},{"url":"https://gitlab.com/smolpxl/smolpxl","title":"smolpxl / smolpxl","description":"Write retro pixelated games in JavaScript. 
Play at https://smolpxl.artificialworlds.net . Learn at https://www.artificialworlds.net/blog/2020/10/11/code-your-first-game-snake-in-javascript-on-raspberry-pi/","keywords":""},{"url":"https://github.com/balazsbotond/urlcat","title":"balazsbotond/urlcat","description":"A URL builder library for JavaScript. Contribute to balazsbotond/urlcat development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/MikeMcl/big.js","title":"MikeMcl/big.js","description":"A small, fast JavaScript library for arbitrary-precision decimal arithmetic. - MikeMcl/big.js","keywords":""},{"url":"https://jakearchibald.com/2020/avif-has-landed/","title":"AVIF has landed","description":"AVIF is the first browser image format we've had in 10 years. Let's see how it performs…","keywords":""},{"url":"https://ninjarockstar.dev/css-hex-grids/","title":"Building a hexagonal grid using CSS grid","description":"CSS grid goes hexagonal.","keywords":""},{"url":"https://blog.bitsrc.io/8-methods-to-search-javascript-arrays-fadbce8bea51?gi=1f399832d7ba","title":"8 Methods to Search JavaScript Arrays","description":"JavaScript has a handful of methods to help search and filter Arrays. The methods vary depending upon if you would like to search using an item or a predicate as well as whether you need to return…","keywords":""},{"url":"https://github.com/ionic-team/stencil","title":"ionic-team/stencil","description":"A toolchain for building scalable, enterprise-ready component systems on top of TypeScript and Web Component standards. Stencil components can be distributed natively to React, Angular, Vue, and traditional web developers from a single, framework-agnostic codebase. 
- ionic-team/stencil","keywords":""},{"url":"https://demo.greenroots.info/categories/web-apis/","title":"Web Apis","description":"The demo lab helps you navigating through demos of several projects with \n guided information","keywords":""},{"url":"https://blog.tensorflow.org/2020/08/introducing-danfo-js-pandas-like-library-in-javascript.html?linkId=98080391","title":"Introducing Danfo.js, a Pandas-like Library in JavaScript","description":"Danfo.js is an open-source, JavaScript library that provides high-performance, intuitive, and easy-to-use data structures for manipulating and processing structured data. Danfo.js is heavily inspired by the Python Pandas library and provides a similar interface/API. This means that users familiar with the Pandas API and know JavaScript can easily pick it up.","keywords":""},{"url":"https://2ality.com/2020/08/minimal-react.html","title":"Minimal React: getting started with the frontend library","description":"","keywords":""},{"url":"https://github.com/ka-weihe/fastest-levenshtein","title":"ka-weihe/fastest-levenshtein","description":"The fastest implementation of Levenshtein distance in JS/TS. - ka-weihe/fastest-levenshtein","keywords":""},{"url":"https://github.com/edwinm/carbonium","title":"edwinm/carbonium","description":"One kilobyte library for easy manipulation of the DOM - edwinm/carbonium","keywords":""},{"url":"https://web.dev/sign-in-form-best-practices/","title":"Sign-in form best practices","description":"Use cross-platform browser features to build sign-in forms that are secure, accessible and easy to use.","keywords":""},{"url":"https://elad.medium.com/why-css-logical-properties-arent-ready-for-use-c102925a5cba","title":"Why CSS Logical Properties Aren’t Ready for Use!","description":"The new CSS logical properties module is one of the most important developments to have come to CSS in recent years. 
This module enables us to support all the directions that human languages are…","keywords":""},{"url":"https://ryersondmp.github.io/sa11y/","title":"Sa11y - accessibility quality assurance assistant - Ryerson University","description":"Sa11y is an accessibility quality assurance tool that visually highlights common accessibility and usability issues.","keywords":""},{"url":"https://kilianvalkhof.com/2020/javascript/supercharging-input-type-number/","title":"Supercharging | Kilian Valkhof","description":"The number input type provides a nice way for to deal with numbers. You can set bounds with the min and max attributes and users can press up and down to go add or remove 1, or if you add the step attribute, go up or down by a step. But what if we want […]","keywords":""},{"url":"https://github.com/Debdut/omg-curry","title":"Debdut/omg-curry","description":"Curry All Code. Contribute to Debdut/omg-curry development by creating an account on GitHub.","keywords":""},{"url":"https://kickstand-ui.com/","title":"Kickstand UI","description":"Kickstand UI is a Design System built using Web Components so you can use it Everywhere!","keywords":""},{"url":"https://css-tricks.com/focus-management-and-inert/","title":"Focus management and inert","description":"Many forms of assistive technology use keyboard navigation to understand and take action on screen content. One way of navigating is via the Tab key. You","keywords":""},{"url":"https://web.dev/min-max-clamp/","title":"min(), max(), and clamp(): three logical CSS functions to use today","description":"Min, max, and clamp provide some powerful CSS capabilities that enable more responsive styling with fewer liens of code. 
This post goes over how to control element sizing, maintain proper spacing, and implement fluid typography using these well-supported CSS math functions.","keywords":""},{"url":"https://addyosmani.com/blog/preload-hero-images/","title":"Preload late-discovered Hero images faster","description":"If you are optimizing Largest Contentful Paint, preload can be a game-changer for speeding up late-discovered hero images and resources, loaded via JavaScript.","keywords":""},{"url":"https://yjs.dev/","title":"Yjs Shared Editing","description":"","keywords":""},{"url":"https://github.com/relm-us/svelt-yjs","title":"relm-us/svelt-yjs","description":"A library for your Svelte app that lets you build Svelte stores from Yjs types. - relm-us/svelt-yjs","keywords":""},{"url":"https://www.youtube.com/playlist?list=PL8bMgX1kyZThM1sbYCoWdTcpiYysJsSeu","title":"Svelte Summit 2020","description":"Share your videos with friends, family, and the world","keywords":"video, sharing, camera phone, video phone, free, upload"},{"url":"https://sveltelab.app/","title":"Svelte Lab","description":"Page Description","keywords":""},{"url":"https://github.com/hjalmar/svelte-electron-boilerplate","title":"hjalmar/svelte-electron-boilerplate","description":"Ready to go electron configured boilerplate. 
Contribute to hjalmar/svelte-electron-boilerplate development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/Budibase/budibase","title":"Budibase/budibase","description":"Budibase is an open-source low code platform that helps IT professionals build, automate and self-host internal tools in minutes 🚀 - Budibase/budibase","keywords":""},{"url":"https://jakedowsmith.studio/","title":"Jake Dow-Smith Studio — A Website Design Studio","description":"Working with forward-thinking creative companies and individuals to create alternative websites for a distracted generation.","keywords":""},{"url":"https://laurelschwulst.com/","title":"Laurel Schwulst","description":"","keywords":""},{"url":"https://supabase.io/","title":"The Open Source Firebase Alternative | Supabase","description":"The Open Source Alternative to Firebase.","keywords":""},{"url":"https://github.com/Heydon/watched-box","title":"Heydon/watched-box","description":"Contribute to Heydon/watched-box development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/Heydon/inclusive-design-checklist","title":"Heydon/inclusive-design-checklist","description":"Aims to be the biggest checklist of inclusive design considerations ever - Heydon/inclusive-design-checklist","keywords":""},{"url":"https://github.com/Heydon/bruck","title":"Heydon/bruck","description":"A prototyping system built with web components and the Houdini Paint API - Heydon/bruck","keywords":""},{"url":"https://mutable.gallery/","title":"The Mutable Gallery","description":"A gallery of generative art by Heydon Pickering","keywords":""},{"url":"https://buttondown.email/","title":"Buttondown","description":"Buttondown is the easiest way to start, write, and grow your newsletter.","keywords":""},{"url":"https://getkap.co/","title":"Kap - Capture your screen","description":"An open-source screen recorder built with web 
technology","keywords":"Kap,capture,record,screen,aspect,ratio,HD,FPS,60FPS"},{"url":"https://supabase.io/","title":"The Open Source Firebase Alternative | Supabase","description":"The Open Source Alternative to Firebase.","keywords":""},{"url":"https://github.com/localForage/localForage","title":"localForage/localForage","description":"💾 Offline storage, improved. Wraps IndexedDB, WebSQL, or localStorage using a simple but powerful API. - localForage/localForage","keywords":""},{"url":"https://thegallery.io/","title":"The Gallery – Minimal Websites","description":"A curated collection of minimal websites. Curated by Manu (manuelmoreale.com).","keywords":""},{"url":"https://mnmll.ist/","title":"mnmllist → listing all things minimal","description":"a curated list of all things related to minimalism, from product design to books to art installations","keywords":""},{"url":"https://docs.imgix.com/tutorials/responsive-images-client-hints","title":"Next-Generation Responsive Images with Client Hints | imgix Documentation","description":"Automate your responsive design with imgix’s Client Hints support","keywords":""},{"url":"https://www.spirit.fish/","title":"Spirit Fish: A futuristic environment for browser-based apps","description":"Super charge your frontend projects with instant deployments, framework agnostic rendering, and powerful content delivery. Free to get started.","keywords":""},{"url":"https://negative.sanctuary.computer/","title":"Studio Carbon Negative • Sanctuary Computer","description":"Our earth is on fire; we're taking stock. Everything we build causes damage - do we deserve to exist?","keywords":""},{"url":"https://ahrefs.com/","title":"Ahrefs - SEO Tools & Resources To Grow Your Search Traffic","description":"You don't have to be an SEO pro to rank higher and get more traffic. 
Join Ahrefs – we're a powerful but easy to learn SEO toolset with a passionate community.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/03/complete-guide-accessible-front-end-components/","title":"A Complete Guide To Accessible Front-End Components — Smashing Magazine","description":"An up-to-date collection of accessible front-end components: accordions, form styles, dark mode, data charts, date pickers, form styles, navigation menu, modals, radio buttons, \"skip\" links, SVGs, tabs, tables, toggles and tooltips.","keywords":""},{"url":"https://www.sarasoueidan.com/blog/accessible-text-labels/","title":"Accessible Text Labels For All","description":"Welcome to Sara Soueidan’s Web site.","keywords":"freelancer, front-end web developer, HTML, CSS, SVG, UI, UX, a11y, Lebanon"},{"url":"https://calibreapp.com/blog/css-performance","title":"How to Improve CSS Performance | Calibre ","description":"Learn the most common speed issues caused by CSS and how to avoid them.","keywords":""},{"url":"https://www.voorhoede.nl/en/blog/why-skip-links-are-important-for-accessibility/","title":"Why skip-links are important for accessibility","description":"Skip-links play an important role in making a website accessible for everybody. Here I point out why and how you can implement them consistently.","keywords":""},{"url":"https://github.com/luruke/aladino","title":"luruke/aladino","description":"🧞‍♂️ Your magic WebGL carpet. Contribute to luruke/aladino development by creating an account on GitHub.","keywords":""},{"url":"https://www.fontshare.com/","title":"fontshare","description":"Fontshare is a free fonts service from ITF, making quality fonts accessible to all. 
It’s a growing collection of professional grade fonts that are 100% free for commercial and personal use.","keywords":""},{"url":"https://www.smashingmagazine.com/2021/03/complete-guide-accessible-front-end-components/","title":"A Complete Guide To Accessible Front-End Components — Smashing Magazine","description":"An up-to-date collection of accessible front-end components: accordions, form styles, dark mode, data charts, date pickers, form styles, navigation menu, modals, radio buttons, \"skip\" links, SVGs, tabs, tables, toggles and tooltips.","keywords":""},{"url":"https://kit.svelte.dev/","title":"SvelteKit • The fastest way to build Svelte apps","description":"SvelteKit is the official Svelte application framework","keywords":""},{"url":"https://www.smashingmagazine.com/2018/05/css-custom-properties-strategy-guide/","title":"A Strategy Guide To CSS Custom Properties — Smashing Magazine","description":"Dynamic properties provide opportunities for new creative ideas, but also the potential to add complexity to CSS. To get the most out of them, we might need a strategy for how we write and structure CSS with custom properties.","keywords":""},{"url":"https://css-tricks.com/patterns-for-practical-css-custom-properties-use/","title":"Patterns for Practical CSS Custom Properties Use","description":"I've been playing around with CSS Custom Properties to discover their power since browser support is finally at a place where we can use them in our","keywords":""},{"url":"https://frasersequeira.medium.com/building-a-serverless-api-usage-tracker-on-aws-2002e82503c3","title":"Building a Serverless API Usage tracker on AWS","description":"Supported EventBridge Targets https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-targets.html The event bus in us-east-1(N.Virginia) supports 10K PutEvents per second. 
A bus can…","keywords":""},{"url":"https://wasi.dev/","title":"WASI | ","description":"","keywords":""},{"url":"https://github.com/hyperium/hyper","title":"hyperium/hyper","description":"An HTTP library for Rust. Contribute to hyperium/hyper development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/seanmonstar/reqwest","title":"seanmonstar/reqwest","description":"An easy and powerful Rust HTTP Client. Contribute to seanmonstar/reqwest development by creating an account on GitHub.","keywords":""},{"url":"https://github.com/seanmonstar/warp","title":"seanmonstar/warp","description":"A super-easy, composable, web server framework for warp speeds. - seanmonstar/warp","keywords":""},{"url":"https://noti.st/jensimmons/h0XWcf","title":"Everything You Know About Web Design Just Changed by Jen Simmons","description":"2017 saw a sea change in web layout, one that few of us have truly come to grips with. We’re standing at the threshold of an entirely new era in digital design—one in which, rather than hacking layouts together, we can actually describe layouts directly. The benefits will touch everything from prototyping to custom art direction to responsive design. In this visionary talk, rooted in years of practical experience, Jen will show you how to understand what’s different, learn to think through multiple stages of flexibility, and let go of pixel constraints forever.","keywords":""},{"url":"https://www.youtube.com/c/LayoutLand","title":"Layout Land","description":"Learn what's now possible in graphic design on the web — layout, CSS Grid, and more. 
A series for web designers and developers, created by Jen Simmons.","keywords":"\"Jen Simmons\" \"Developer Tools\" DevTools \"Developer Relations\" CSS \"CSS Grid\" Layout \"Graphic design\" Flexbox Fonts \"Resilient web design\" \"Resilient CSS\" \"c..."},{"url":"https://labs.jensimmons.com/","title":"Web Design Experiments by Jen Simmons","description":"Experiments demonstrating CSS Grid, and what's now possible in graphic design on the web.","keywords":""},{"url":"https://phiilu.com/password-protect-your-vercel-site-with-cloudflare-workers","title":"Password protect your (Vercel) site with Cloudflare Workers","description":"Use Cloudflare Workers to simply add Basic HTTP authentication to any of your websites even if they are hosted on a service like Vercel or Netlify.","keywords":"javascript, serverless, cloudflare workers"},{"url":"https://www.oreilly.com/library/view/getting-started-with/9781449339791/ch01.html","title":"Getting Started with Dwarf Fortress","description":"Chapter 1. Introduction Dwarf Fortress is a freeware game developed by Bay 12 Games for Windows, Linux, and Mac OS X-based computers. It has been in development since 2002 and … - Selection from Getting Started with Dwarf Fortress [Book]","keywords":""},{"url":"https://lunrjs.com/","title":"Lunr: A bit like Solr, but much smaller and not as bright","description":"","keywords":""},{"url":"https://oauth1.wp-api.org/docs/basics/Signing.html","title":"Signing Requests | OAuth1","description":"","keywords":""},{"url":"https://fusejs.io/","title":"What is Fuse.js? | Fuse.js","description":"Lightweight fuzzy-search library, in JavaScript","keywords":""},{"url":"https://matthewdaly.co.uk/blog/2019/02/20/searching-content-with-fuse-dot-js/","title":"Searching Content With Fuse.js - Matthew Daly's Blog","description":"I'm a web developer in Norfolk. 
This is my blog...","keywords":"javascript,search,json"},{"url":"https://1loc.dev/#convert-cookie-to-object","title":"1 LOC - Favorite JavaScript utilities in single line of code","description":"Favorite JavaScript utilities in single line of code","keywords":""}] \ No newline at end of file diff --git a/src/data/texts/index.json b/src/data/texts/index.json index ebfac36..0c0b4d2 100644 --- a/src/data/texts/index.json +++ b/src/data/texts/index.json @@ -1 +1 @@ -[{"meta":{"title":"Factorial! Race!","description":"

Finding an upper limit on factorials in JavaScript

\n","date":"2021.06.07","slug":"factorial-race","collection":"texts","timestamp":1623049200000},"content":"

For the past while now, I've been tinkering on a side project that builds and graphs arbitrary probability distributions created by dice rolling outcomes. It's extremely niche and dorky, but it's been a really fun way to explore both product design and new concepts in math and programming that have otherwise never presented themselves during my career.

\n

This is an article about the second bit.

\n

One of the interesting things I discovered early on was that when adding the ability to multiply and reduce dice rather than just multiply or reduce dice (i.e. roll 1d6, roll the resulting number of dice, on a 4 or higher roll again, then sum the total results. Complicated!), the distributions are not normal, which means in order to actually graph the distribution we need to calculate every possible outcome. Not a big deal, since computers are good at this sort of stuff!

\n
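To make "calculate every possible outcome" concrete, here's a minimal sketch (my illustration, not the project's actual code) for a simpler mechanic: roll 1d6, then roll that many d6 and sum them. Every branch gets enumerated and weighted by its probability:

```javascript
// Hypothetical sketch: build the exact distribution for
// "roll 1d6, then roll that many d6 and sum them"
// by enumerating every possible outcome.
function rollThenSumDistribution() {
  const dist = new Map(); // sum -> probability
  for (let first = 1; first <= 6; first++) {
    // enumerate all 6^first combinations of the follow-up rolls
    let sums = [0];
    for (let i = 0; i < first; i++) {
      const next = [];
      for (const s of sums)
        for (let face = 1; face <= 6; face++) next.push(s + face);
      sums = next;
    }
    // each combination under this first roll is equally likely
    const p = (1 / 6) * (1 / sums.length);
    for (const s of sums) dist.set(s, (dist.get(s) || 0) + p);
  }
  return dist;
}

const dist = rollThenSumDistribution();
dist.get(1); // 1/36 — only reachable by rolling a 1, then a 1
```

Even this toy version visibly skews away from a bell curve, which is why the real thing has to brute-force the outcome space.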

However, I quickly discovered an upper bound: the calculation requires working with factorials. When the factorials get big, JavaScript gives up and returns Infinity. This is because there is a maximum size for the double-precision floating-point numbers that JS uses for its Number type. Wow, this got both mathy and programmery really quickly.

\n
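You can see the ceiling directly (a quick illustration, not from the original post): Number.MAX_VALUE is the largest finite double, and anything past it collapses to Infinity.

```javascript
// The largest finite value JavaScript's Number type can hold:
Number.MAX_VALUE; // 1.7976931348623157e+308

// Anything that overflows it becomes Infinity:
Number.MAX_VALUE * 2; // Infinity
```

171! is roughly 1.24e+309, just past that limit, which is exactly why 170! ends up being the ceiling below.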

This gave us an upper bound of 170!, since the rest of the distribution calculations don't like it when you pass them Infinity.

\n
const factorial = (x) => x > 1 ? x * factorial(x - 1) : 1\nfactorial(170) // 7.257415615307994e+306\nfactorial(171) // Infinity\n
\n

Lucky for us, JavaScript has implemented a Big Integer type for integers that are … well … big! I was able to refactor my factorial function to use BigInts.

\n
const big_factorial = (x) => x > 1 ? x * big_factorial(x - BigInt(1)) : BigInt(1)\nbig_factorial(BigInt(171)) // 1241018070217667823424840524103103992616605577501693185388951803611996075221691752992751978120487585576464959501670387052809889858690710767331242032218484364310473577889968548278290754541561964852153468318044293239598173696899657235903947616152278558180061176365108428800000000000000000000000000000000000000000n\n
\n

So what's our new upper bound? We can handle 170! easily; how high can we go? 1,000!? 10,000!?

\n
big_factorial(BigInt(1000)) // 402387260…0n\nbig_factorial(BigInt(10000)) // RangeError: Maximum call stack size exceeded\n
\n

Turns out 1,000! is a walk in the park. 10,000! gets a little more interesting! The error the function throws is about too much recursion. We're calling big_factorial from big_factorial ten thousand times, and the browser thinks this means something is wrong, so it bails out on the process.

\n

So, what if we refactor our recursive big_factorial to use a loop?

\n
const big_fast_factorial = (x) => {\n  let r = BigInt(1);\n  for (let i = BigInt(2); i <= x; i++)\n    r = r * i;\n  return r;\n}\nbig_fast_factorial(BigInt(10000)) // 284625…0n in about 70ms\n
\n

10,000! is fast! We can reliably get the result of that in less than 100ms. And since our loop will run as long as it needs to, our upper bound should now be based on compute and return time, rather than type errors or browser guardrails. Let's see what we can do now:

\n
big_fast_factorial(BigInt(20000)) // ~300ms\nbig_fast_factorial(BigInt(30000)) // ~650ms\n// …\nbig_fast_factorial(BigInt(90000)) // ~7238ms\nbig_fast_factorial(BigInt(100000)) // ~9266ms\n
\n

Things … start to get slow above 30 or 40 thousand factorial. Every additional ten thousand added to our initial number adds more and more time to the computation. I'm sure there's some fancy O(n) complexity notation to express this, but I don't really want to figure that out. It's too slow to use in a UI above, say, 50,000!.

\n

Turns out, though, even mathematicians don't really calculate factorials this big. They use Stirling's approximation instead, since it's faster and "good enough". (Strictly speaking, the identity below is exact; Stirling's approximation goes further and estimates the sum itself as roughly 𝑛 ln 𝑛 − 𝑛.) It looks sort of like this:

\n
𝑒^(ln(𝑛!)) = 𝑛!\nwhere\nln(𝑛!) = ∑ₖ₌₁ⁿ ln(𝑘)\n
\n

It would be pretty cool to do this in JavaScript! And personally, I love "good enough". I've already got a handy function for running Big Sigma calculations:

\n
const new_arr = (s,n) => {\n  if (s < 0 || n < 0) { return []}\n  return Array.from(Array(n+1).keys()).filter(v => v > s-1)\n}\n\nconst Σ = (min, max, fn) => new_arr(min, max).reduce((acc, val) => fn(val) + acc, 0)\n
\n

So let's try this out:

\n
const log = (x) => Math.log(x)\nMath.exp(Σ(1, 1000000, log)) // Infinity\n
\n

Oh no! The end result of our 1,000,000! function is still Infinity. That's because one million factorial is … very big. It could still fit into a BigInt, but then we have another problem: we can't run Math functions on the BigInt type. And we can't rewrite the functions to use BigInts because the type is, by definition, only for integers, and 𝑒 is definitely not an integer. Even a math library like math.js has the same issues around typing, despite trying to account for it.


Naturally, this leads to a simple proposal: FaaStorials! Fast Factorials as a Service! Since factorials are immutable, it should be possible to crunch (slowly) the first 1,000,000 or so, store them in a database, and provide an API for querying and returning them. Even a slow network request would be faster than computing the factorial locally. I wrote this function, and got about 7,000 rows written before I realized it would probably be expensive.
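
The crunch-and-store script isn't reproduced here, but the core loop could be sketched like this, with a stubbed-out `writeRow` standing in for the real database write (a hypothetical name, not the actual script):

```js
// A sketch of the crunch-and-store idea. `writeRow` is a hypothetical
// stand-in for whatever database write the real script performed.
const rows = []
const writeRow = (n, value) => rows.push({ n, value })

const crunchFactorials = (limit) => {
  let acc = 1n
  for (let i = 1n; i <= limit; i++) {
    acc *= i                    // each factorial builds on the previous one
    writeRow(i, acc.toString()) // store the decimal string for later retrieval
  }
}

crunchFactorials(5n) // rows now holds 1! through 5!
```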


According to my rough estimating, 1,000,000! would send a response that weighs about 700kb, and the whole database would be in the neighborhood of 350gb. That would cost me about $80 a month to store, and maybe another $100 a month for the requests. I pulled the plug on the script.

As with many problems, the upper bound ends up being defined by time and money, the end!

---
title: Headless CMS; A Brief Introduction
slug: headless-cms-introduction
date: 2021.05.28
description: What is a Headless CMS, and how can it be useful for building websites and apps?
type: text
---

A Content Management System is a central aspect of any web project - even projects you would never think of as "having" or "using" a CMS. In these sorts of projects the content management system is the codebase, and the strategy for managing content is identical to the process for managing code. This, obviously, is not ideal for anyone who wants to edit content and not code, or is uncomfortable in the workflows that developers rely on for our day-to-day practice — like Git. The other end of the CMS spectrum is something like Squarespace — the code is the content. Not ideal if you want to edit code and not content. A traditional CMS like Wordpress attempts to split the difference; the CMS controls the code and the content, but makes an attempt to keep them at least a little independent by storing content in a database and providing an admin interface for editing and authoring that content.

All of the above approach the problem of content management with the same set of assumptions: the CMS is responsible for taking the content, combining it with the code, and assembling and delivering the entire website. Both parts are coupled together, with the CMS rendering the "head" of the website or app.


In the past few years, with the rise of build-time generated and static sites, a new approach to this problem has been articulated and built by a number of companies. The basic idea underlying this new approach is that the CMS should only be in charge of content, and interacted with like any other API. This decouples the code from the content, and removes the CMS from any responsibility for interacting with the code at all. This style of CMS does not render anything out to the web, and is thus called a headless CMS. In short, a headless CMS has no website out of the box. This means there is no default theme (there are no themes!) to configure, and there are no classic "blog" visuals or interfaces to configure and map to content. A blog is just one of many things a headless CMS can be used for.

A headless CMS has a number of advantages. The first among them is that the product in charge of managing content can focus solely on managing content, and be very, very good at authoring, creating, and editing without also needing to be a good tool for building web apps. The second is that it gives the development team complete freedom to meet the real-world business use cases associated with the project without relying on the CMS to support those use cases.

Search Engine Optimization is an excellent example of these two characteristics at work – we are completely free to implement any SEO improvements without any support from the CMS, because the CMS doesn't do anything but manage content. SEO tags and page metadata can become content like any other content, and the codebase of the web app is responsible for rendering the actual website as it goes over the wire and gets consumed by browsers. Instead of relying on Wordpress plugins or trusting that Squarespace is following best practices, all of the implementation details of your SEO strategy are completely in your team's control — just like any project without a CMS integration — while the content itself is entirely in your strategy or marketing team's control.

A Headless CMS exposes content via an API, and that's all it does.


Contentful provides a set of client libraries that allow content to be consumed in a developer's language of choice, meaning that the technologies or systems used to render your app can be selected by whatever criteria are at hand, rather than forced on you by the CMS – if you use Wordpress, you're writing your app in PHP within Wordpress. With Contentful you can write your app in Go if you want and live your best life.

Below is a quick overview of using the Contentful JavaScript SDK to access content in the Headless CMS. It returns JSON and can be used at run-time or build-time to add content to a website or app:

```js
const contentful = require('contentful')

const client = contentful.createClient({
  space: <space-id>,
  accessToken: <access-token>,
  host: <host>
})
```

The client provides a set of methods for interacting with and querying the content database:

```js
getSpace: async ƒ getSpace(),
getContentType: async ƒ getContentType(),
getContentTypes: async ƒ getContentTypes(),
getEntry: async ƒ getEntry(),
getEntries: async ƒ getEntries(),
getAsset: async ƒ getAsset(),
getAssets: async ƒ getAssets(),
createAssetKey: async ƒ createAssetKey(),
getLocales: async ƒ getLocales(),
parseEntries: ƒ parseEntries(),
sync: async ƒ sync()
```

You can use the getEntries function to get all the entries available:

```js
client.getEntries()
  .then(entries => {
    console.log(entries)
  })
```

Or query on metadata or content:

```js
client.getEntries({
  content_type: 'lesson',
  'fields.slug[in]': 'content-management'
}).then(entries => {
  console.log(entries)
})
```

Contentful in particular is interesting because one of the fields you can add to your entries is a reference to other entries. This gives the information architecture model some pretty amazing abilities, and enables pretty much any sort of content strategy you want to build: from simple key-value pairs for storing strings to complicated, nested, conditional data structures.
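
To illustrate (the field names here are invented for the sketch, not Contentful's documented response shape), a page entry referencing child entries might come back as JSON shaped something like:

```json
{
  "fields": {
    "title": "Content Management",
    "sections": [
      { "fields": { "heading": "Intro", "body": "…" } },
      { "fields": { "heading": "Details", "body": "…" } }
    ]
  }
}
```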


And from the code's perspective, it's all just JSON!


For an example of how one can write components that consume this general API data, I've put together a small sample of how to create a component that's defined by JSON structures, and how handling different configuration keys alongside content strings can create a powerful way to integrate with a Headless CMS like Contentful. Check out the demo on Glitch.
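
The Glitch demo isn't reproduced here, but the core idea can be sketched in a few lines; the entry shape and field names below are hypothetical:

```js
// A component defined by a JSON entry: a configuration key (`variant`)
// selects the markup, and content strings (`title`, `body`) fill it in.
const render = ({ fields }) =>
  fields.variant === 'hero'
    ? `<header class="hero"><h1>${fields.title}</h1></header>`
    : `<article><h2>${fields.title}</h2><p>${fields.body}</p></article>`

const entry = { fields: { variant: 'hero', title: 'Hello', body: '' } }
render(entry) // '<header class="hero"><h1>Hello</h1></header>'
```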

---
title: Towards an Ethical Web Development
slug: an-ethical-web-development
date: 2020.09.01
description: Thinking about what it means for an industry to determine a moral standard of practice.
type: text
---

Since the beginning of this summer, with everything that’s descending upon us with Covid-19 and the Black Lives Matter uprising, it feels like we are living through a moment of moral accounting. In Portland, antifa is in the street in running battles with secret police, exploited workers are speaking up about the realities of our treasured restaurant industry, business owners are shutting down and pulling a disappearing act instead of facing accountability for their behavior. This has me thinking about my industry, which we all know has massive problems around racism, techno-fascism, and robber-baron level exploitation. We’re still dealing with people who read Snow Crash and Neuromancer and think those books were descriptions of Utopias.


There are a lot of people doing hard work right now to address these issues in the industry, working to identify how we can — as businesses — move toward a more just system of working, how we need to avoid baking our prejudices in to the AIs we build, and how the physical underpinning of the internet is built on an exploitative and extractive logic of global capital. This is all good and necessary work. It makes me think, though, about whether there are distinctions between the craft and practice of web development and the business logic that drives the industry. The industry vs the practice - as in the technical skill of painting vs the economic system of patronage. Clearly they are related, and clearly our conception of painting has always been tied to the economic structures that make it a reality as a profession, but is there a way to think about an Ethical Web Development? It would be tied to running an ethical business, and necessarily need to be supported by an ethical economic system, but how could we articulate what it would look like to perform the craft and practice of web development ethically? What would an antiracist and antifascist web development practice look like?

Recently we were visiting my parents in Corvallis, and as it happens my mom's partner is Michal Nelson - a moral philosopher who specializes in ecological and environmental ethics. So I asked him if there were any frameworks for determining whether a given industry (his discipline focuses on forestry and resource management, for example) was acting "ethically". How does an industry set its own standard of ethical behavior? He explained that acting "ethically" is essentially just staying internally consistent to a set of values that you've articulated — and in this way being "ethical" or "sustainable" doesn't inherently hold any value. Many people or entities agree on the importance of acting ethically, but their baselines for what the core concepts actually mean — what they are talking about when they say "harm", for example — can vary wildly.

An ethical framework is a tool to reduce the possibility space - to help determine what choices available should be taken rather than could be taken. It creates a heuristic for determining which actions will cultivate an environment that supports a set of desired values. Determining ethical considerations becomes a design problem — what values do we want to see in the world? What outcomes do we want and why? Who are we considering, who are we not considering? What are the edges and the limitations beyond which we decide not to concern ourselves?


There are many ethics available to work from, and conflating a single ethic with the entire range of possible ethics can be a problem. If we assume that a utilitarian, individualist ethic is the only possible ethic we can work with (cough cough economics cough cough) it will necessarily preclude a whole range of outcomes that just aren’t possible to arrive at from those starting conditions. Another ethic based on compassion, animism, and collectivism would start with wildly different axioms and naturally arrive at different outcomes. This debate and tension is at play today among the people who study and try and outline what a sustainable future of forest management is - it’s an ongoing conversation (argument?) between people who don’t share the same basic axioms around what they are trying to talk about.


Perhaps the core of any ethic is "harm reduction". We can see this in the Hippocratic oath and the medical ethic — first do no harm. The issue is that "harm" is not a natural, discoverable property of the natural world. Harm is necessarily framed as a value judgment, the corollary to the idea of "good in the expanded field". We can outline the structural requirements and definitions of harm, and use those value judgments to identify harm in the world, but it's still a human judgment system being used to sort and order the world. To illustrate this in the medical world, we can look at the contemporary prerogative of "informed consent". We now define a lack of agency over medical decisions as a harm done to an individual. This has not always been the case.

This starts to get at the ecological connections between the craft and the business of web development - we can easily frame accessibility as a fundamental requirement of an ethical web development practice in that it reduces the harm of excluding individuals from our work. But we can arrive at that principle from the logic of many different ethics, some of which could be wildly contradictory. From a utilitarian, efficiency and profit maximizing ethic, creating accessible web apps is ethical because not doing so would be to leave money on the table. From a humanistic and compassionate ethic, accessibility is ethical because it fosters inclusion and equity. So we see accessibility as a clearly defined ethical practice, but that doesn't mean that we all agree on why accessibility is ethical.

I think in order for us to define what we mean when we try and define how our industry can be ethical, we need to work through a few steps:

1. What values do we want to encourage and foster in the environment?
2. Where do we draw the distinction between things we are concerned with and things we are not?
3. How do we determine who is affected by our ethic and who is not?
4. How do we then define the rules for determining which actions we should take and which actions we should not take?

If, as an industry and as individuals, we can have these conversations, then we can start to come to terms with what it means to work every day in a world where we are actively supporting and enriching the world's first trillionaire. Is an ethical web development one where we must boycott AWS? How do we feel about data centers in general? What about ISPs? What about underwater cables?

Jaya Saxena's recent piece in Eater clearly identifies the problems associated with acting ethically as an individual — or even an industry — while remaining a part of a society that systemically undermines that ethic through structural design.

> Building an equitable restaurant, a place where all workers are paid fairly, have benefits, and can work in an anti-discriminatory environment, is going to take a near-undoing of the way most restaurants are run.

Saxena examines some current models for employee-owned and cooperative businesses, and privately owned businesses that actively choose equality and community over profit. She identifies that there is a compromise in these enterprises, and sums up the systemic issues at hand by concluding that "… when it comes to restaurants, it's hard to change one thing unless you're changing everything."

There are systemic forces at work that prevent any individual, or even any small community, from truly reaching a place of ethical behavior. This makes me think that there has to be a split between "acting ethically" and "being ethical". We can all act ethically, working our way upstream against the systemic forces arrayed against us, but that's no guarantee that we will, at the end of the day, be ethical.

The essay identifies one restaurant and farm that solves their ethical crisis by charging $195 per person per meal, and frames that as a choice that consumers get to make. This is striking. We live in a time of unprecedented efficiency, unbelievable abundance, and massive wealth, but if a restaurant is called to truly account for its exploitation and charge its true price, it's immediately untenable. This feels like it must be true across many industries – Uber would rather cease service in California than treat its drivers like employees. What would it take to truly understand the network of costs, values, debts, and the real price of things?

Would an ethical web development be able to account for that cost and still be able to be a business in our society? During my early courses in fine art printmaking at University, I was taught that a blank sheet of paper had value on its own. Not only the price attached to it (steep, for nice paper) but also the work and craft that went in to making it. One had to be sure that the image we were impressing on the blank sheet of paper added value to it rather than reduced it. Our work had to be more valuable than the paper, and if it wasn't we didn't make it.

Ingrid Burrington has written extensively about the physical realities of the internet, and what it means to turn the raw stuff of the earth into the objects we need to make computers. She's even turned computers back into raw stuff. It's hard to confront the reality of an open pit lithium mine and conclude that needs must for better batteries.


Can web development be ethical? Maybe not. But that doesn't mean that we don't have an obligation to act ethically. If we can articulate the ethic we want to have in our industry, and stay internally consistent to those principles in an effort to manifest values we want in the world, maybe that's enough. Or a start anyway.

---
title: Fun With JSON-LD
slug: fun-with-json-ld
date: 2020.08.25
description: Learning what JSON-LD is all about and why we should use it.
type: text
---

Working with Adam Riemer on SmugMug's SEO has been a really illuminating experience. SEO consulting has always been flagged in my mind as "Snake Oil Business", but Adam really is the best in the field. Almost all of his SEO suggestions focus on performance and accessibility, and he has some clear, hard metrics to define "good". This squares with my fundamental understanding of good SEO practices, and has broadened my horizons and understanding of the practice.

Something that Adam introduced me to is JSON-LD – a way of creating structured metadata for pages that's more explicit than microdata formats. Here's what I've learned about JSON-LD so far.

JSON-LD is Google's preferred format for accurately and succinctly structuring metadata for pages. This gives them insight into what's on your page and why, and they use The Algorithm to interact with and consume this data. Using their standards gives you the opportunity to get top, fancy search results, but there's no guarantee of that. The best thing to do is to use your structured data to give the best, most accurate, and complete picture of what content your page has for your audience. Trying to game SEO here is probably going to backfire; just describe things as they are as clearly as possible.

The primary purpose of structured data is to create machine-readable and algorithm-friendly metadata for your content. This allows the content to be consumed by the crawlers and the robots, and join in the mesh of content that Google exposes to users when they perform searches or ask questions of it.

Clearly this is a double-edged proposition. By using structured data you're explicitly buying in to the ecosystem that Google is creating, and allowing your content to be trawled and used and understood however they want. You undoubtedly end up providing value to Google in excess of what they are providing to you. Not to mention participating in the project of making the world machine-readable, which has its own philosophical freight.

Schema.org has a lot of data types that might be appropriate for your project: Articles, Books, Breadcrumbs, Carousel, Course, Critic Review, Dataset, Event, How-to, Local Business, Movie, Podcast, Product, Software App, and Video are all ones that look interesting to me.


For something like this site, we're using pretty much entirely Website and Article – and connecting them with a CollectionPage and a Person, who is me! Maybe some of the art will be a CreativeWork.

Schema.org has detailed information on each of these types.

Let's work through Google's example of an article, maybe for this article!

Here's the script tag that is home to our structured data:

```html
<script type="application/ld+json">
…
</script>
```

We fill it with a JSON object that describes our data structure:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Article headline",
  "datePublished": "2020-08-25T16:42:53.786Z",
  "dateModified": "2020-08-25T16:42:53.786Z"
}
```

The @context key clues the robot in to the data definition we're going to be using, which is the schema.org definitions. The @type tag associates the following data with the pre-defined structure. From there on it's relevant data! headline, datePublished and dateModified are all directly pulled from the content itself. In our case the data looks like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Fun With JSON-LD",
  "datePublished": "2020-08-12T08:00:00+08:00",
  "dateModified": "2020-08-12T08:00:00+08:00"
}
```

Open question: BlogPosting or Article? I'm going to stick with BlogPosting, since these texts are really just that. I would use Article if I were writing a news piece or a review, or something maybe more scholarly.

The last required field is an image:

> For best results, provide multiple high-resolution images (minimum of 300,000 pixels when multiplying width and height) with the following aspect ratios: 16x9, 4x3, and 1x1.

```json
{
  …
  "image": [
    "https://example.com/photos/1x1/photo.jpg",
    "https://example.com/photos/4x3/photo.jpg",
    "https://example.com/photos/16x9/photo.jpg"
  ]
}
```

This means that creating thumbnails for every Article is important, and those images need to exist on the page in a way that users can see.

For this site, the main use of these images is going to be for sharing thumbnails. The fact that the image needs to be on the page is interesting, since that really influences the design of the page. I've found that requiring a prominent thumbnail or hero image to accompany each article is a recipe for a) not writing articles and b) bland stock photography. I want to avoid both. That means for this site I'm going to do illustrated images: small sketches and motif explorations that may or may not illustrate the article, attached to the bottom of the article.

There are two other sections I want to look at, even though they are not requirements according to Google. These are the author and the publisher fields. The goal of using these fields is to create an association between you and your work; or in the case of the publisher field between an imprint entity and the creative works they've published. In our use case for this site, my goal is to create a machine-readable entity that is 'Nikolas Wise' and attach my articles and my work to that, in order to create a coherent entity that is exposed to the broader web.


The author field is a Person or an Organization; the publisher field is an Organization. Let's start with Person:

> A person (alive, dead, undead, or fictional).
> https://schema.org/Person

It gets added to our JSON-LD like this:

```json
{
  …
  "author": {
    "@type": "Person",
    …
  }
}
```

There are a lot of properties in this schema, like deathPlace and knows. One could really get into this and make it a very robust and complete data object, but I'm not sure how much value that would bring at the end of the day. There's a fine line between following specs and best practices to achieve a goal and ticking boxes to structure our lives solely in order to make them legible to the algorithm. I guess we each decide where that line is for ourselves.


For me, I'm going to stick with name, url, image, jobTitle, knowsLanguage, and sameAs. Although publishingPrinciples seems interesting, and I might write one of those.


Most of the fields are simple text strings, and can get filled out like so:

```json
{
  …
  "author": {
    "@type": "Person",
    "name": "Nikolas Wise",
    "url": "https://nikolas.ws",
    "image": "https://photos.smugmug.com/Portraits/i-ThnJCF5/0/f9013fdc/X4/wise-X4.jpg",
    "jobTitle": "Web Developer",
    "knowsLanguage": "en, fr",
    "sameAs": …
  }
}
```

The language codes are from the language code spec, and could also be language schema objects. The job title could be a Defined Term schema object.


The sameAs key is an interesting one: it's either a URL or an array of URLs that connect this @person with other parts of the web that are also that @person.

```json
{
  …
  "@person": {
    …
    "sameAs": [
      "https://twitter.com/nikolaswise",
      "https://github.com/nikolaswise",
      "https://www.instagram.com/nikolaswise/",
      "https://www.linkedin.com/in/nikolas-wise-6b170265/"
    ]
  }
}
```

This will connect "me" with this site and my twitter, github, instagram, and linkedin profiles. Those are the pages that I want the algorithm to associate with "me".

@organization is similar to @person in a lot of ways, and the fundamental idea is the same. The goal is to create a single entity that the algorithm can connect disparate pages and items to. I'm not going to set up an @organization here, but the @organization schema type has the spec for the object.

So that's it! That means the entire JSON-LD for this article – and therefore the rest of the texts as well – looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Article headline",
  "datePublished": "2020-08-25T16:42:53.786Z",
  "dateModified": "2020-08-25T16:42:53.786Z",
  "image": [
    "https://example.com/photos/1x1/photo.jpg",
    "https://example.com/photos/4x3/photo.jpg",
    "https://example.com/photos/16x9/photo.jpg"
  ],
  "author": {
    "@type": "Person",
    "name": "Nikolas Wise",
    "url": "https://nikolas.ws",
    "image": "https://photos.smugmug.com/Portraits/i-ThnJCF5/0/f9013fdc/X4/wise-X4.jpg",
    "jobTitle": "Web Developer",
    "knowsLanguage": "en, fr",
    "sameAs": [
      "https://twitter.com/nikolaswise",
      "https://github.com/nikolaswise",
      "https://www.instagram.com/nikolaswise/",
      "https://www.linkedin.com/in/nikolas-wise-6b170265/"
    ]
  }
}
</script>
```
---
title: Pressing Words, With Your Friend, Wordpress
slug: wordpress-but-not-terrible
date: 2018.10.24
description: A contemporary developer's guide to building things on Wordpress 4.x and not having it be terrible.
type: text
---

TL;DR: Start here. Install this thing and connect it to your account on here. Buy a license of this (it's worth it). Read some docs for this and start building. Wordpress 5 and Gutenberg will probably break all of this except the environments.

When I first started working as a developer, Wordpress was the prevalent platform for pretty much any project. Ten years later and … Wordpress is still pretty much most of the internet. In general, Wordpress will be my last choice of a platform. I prefer to build static sites, use a headless CMS, or almost anything else at all.


That said, as the Technical Director at Fuzzco — a design studio that relies almost exclusively on Wordpress for their websites — Wordpress was happening. Fuzzco is rare among studios in that we manage and host projects for our clients, and often have maintenance riders that can last for years. This means that in the course of a year, not only did we build a half dozen new projects on Wordpress, but we maintained and triaged issues on over 100 legacy projects.


Very quickly I realized I had one option: make Wordpress not terrible.


## Terrible is pretty harsh

If you're comfortable with Wordpress, you might find some fightin' words here. What's my problem with Wordpress and what am I trying to solve for? My biggest issue with Wordpress development as I've encountered it in the past is a lack of clarity around the requirements of the entire system. What does the project need to run in an environment, and why? How do we move from a repository to a local environment and start working on a codebase? How does that codebase get deployed to a server?


I've seen Wordpress systems that are frozen in time in 2006 — FTP to the server and edit a CSS file on production, or "deploy" your theme by uploading a .zip. I'm interested in how we can lower the cognitive overhead for getting a Wordpress project up and running, and join in with pre-processing, compiling, containerizing, testing, and all the really excellent things that we've come to expect from our web stacks over the past few years.


Another issue I have with Wordpress is its commitment to auto-magical routes and rendering templates with obscure and complicated .php patterns that basically concatenate strings. I'm interested in explicit routes — either hard-coded or parameterized — and separating concerns between logic and template.


A lot of this boils down to a disagreement between what Wordpress thinks a site should be and what I end up using it for. Wordpress as designed distinguishes between your "site" and your "theme". Your "site" is the content in the database, the options you've saved, and the menus and widgets you've installed. It expects "themes" to be presentations of this real website stuff. This model perpetuates the idea that "design" is something that can be applied over a website, a kind of dressing up of the real thing. This is the inverse of, and perhaps a corollary to, the concept that designing a website is just deciding what it looks like. It's an idea that lives within the system of silos between design and development, where we "design" a website in Photoshop or Sketch and hand off the comps to a developer to build. Which is how a lot of Wordpress projects are built.

In short, I disagree with this concept of websites. My position is that designing a website is how it looks, how it works, and how the data and structures are composed, all at once. Taking this approach, controlling the object models, the information architectures, and the templates are all of equal importance. In my line of work, a Wordpress theme cannot be applied to any site other than the one it was designed for, a site where the structure was designed for the theme.

## So why use Wordpress?

There are still a number of really good, compelling reasons to use Wordpress as a platform for building websites. It's got a robust built-in commenting system with user accounts. It's really good for things that are shaped like blogs. It's got a huge, well-maintained ecosystem of plugins. It's free. And since it's most of the Internet, clients are really, really comfortable with it.


There are a couple of reasons not to use Wordpress right now. Mostly these center around the impending release of Wordpress 5.0 and the Gutenberg editor, which has a number of concerns around plugin compatibility and accessibility for authors.


But that's okay, since we've decided to use Wordpress 4.x. As we all know, picking a version of Wordpress and then never upgrading it is one of the time honored traditions of Wordpress development.


## How does this work even

Let's start at the end.


We're going to be hosting our production Wordpress site on a Digital Ocean droplet — the smallest one they have — for $5 per month. Depending on the project lifecycle, we can set up more droplets for a staging server and a development server. At Fuzzco we used dev servers to show sites to the internal team, staging servers to show sites to the client, and production servers to show sites to the public.


I don't know about you, but I personally don't super love managing my virtual private servers manually. In order to deploy our codebases to Digital Ocean we'll use the phenomenal tool Nanobox. Nanobox is an operations layer that handles containerizing applications and deploying them agnostically to a cloud service provider. Nanobox will deploy our code from the command line to any one of our droplets.


Nanobox will also containerize and run an application in a virtual machine locally. This means we'll use it to run our development environment, and ensure that all of our environments are identical. No more worrying about PHP versions and extensions and plugins. No more running MAMP or MySQL or Apache or whatever on your local machine before anything works. Nanobox defines the server in a .yaml file, and it will always be the same. It also handles all the syncing between our local disk and our virtual environment.


So now that we know how our code is going from local to production, we can think for a second about how it's going to do that, and how we're going to manage our data.


The database on the production server is "canonical". That means that the database the client interacts with is the one true database, and we must treat it with care and attention. We'll never change that database ourselves, and we'll move it downstream from production to staging to dev to local in order to develop against our real data. Importantly, we don't want to migrate the database manually either. It's a little expensive, but WP Migrate DB Pro is an incredible tool for this part. For personal projects, one could look into free alternatives.


The canonical codebase lives in version control, and moves in the other direction. From Github to local to dev to staging to production, amen. The only things we need to track in version control are the ones that make our project unique. Practically, this means we need to track our theme and our plugins. Wordpress core files are not special, and we should not fill our repositories with them.

## Getting started

At this point it's worth getting started with Nanobox. I back the containers with VirtualBox, since at the time I started this it was slightly more stable than Docker on MacOS High Sierra. Once Nanobox and VirtualBox or Docker are installed, set up Digital Ocean as your provider. Once that's done, we have everything we need to get started!


I'll be talking through a project I built in order to facilitate building other projects. This will be more intense than you might need for a single build, but it was designed as a tool that anyone can use to get started quickly. Here's the basic structure of our repo:

```
📁 /project-name
⮑ 📄 .gitignore    # includes /wp
⮑ 📄 package.json  # tooling lives here
⮑ 📄 readme.md     # be nice, write docs
⮑ 📁 theme         # our theme codebase
⮑ 📁 plugins       # vendor plugins
⮑ 📁 scripts       # some helpers
```

The crux of the project is our boxfile.yml configuration file. This is what Nanobox uses to create our containers. It looks like this!

\n
# /boxfile.yml                \nrun.config:                    # \n  engine: php                  #\n  engine.config:               #\n    runtime: php-7.0           # Defines PHP version\n    document_root: 'wp/'       # Dir to serve app from\n    extensions:                # PHP extensions we need\n      - gd                     #\n      - mysqli                 #\n      - curl                   #\n      - zlib                   #\n      - ctype                  #\n                               #\nweb.wp:                        #\n  start: php-server            #\n  network_dirs:                #\n    data.storage:              #\n      - wp/wp-content/uploads/ #\ndata.db:                       #\n  image: nanobox/mysql:5.6     # Nanobox DB magic\n                               #\ndata.storage:                  #\n  image: nanobox/unfs:0.9      #\n
\n

As noted above, we'll be serving our entire installation of Wordpress from the /wp directory. This will hold all the Wordpress core files and compiled theme code, none of which we need or want in version control. As such, make sure this is listed alongside node_modules in the .gitignore.
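
For reference, a minimal sketch of those .gitignore entries (your real file will likely have more):

```
# .gitignore (sketch)
/wp
node_modules
```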


Since we've decided that we don't want to track these files, but we need them to actually have a project, we can write a helper script to take care of the gap between those two ideas.


Here are the scripts we're going to write to help us handle this process:

\n
📁 /project-name\n⮑ 📁 scripts\n   ⮑ 📄 check-install.sh # Installs Wordpress core files.\n   ⮑ 📄 init.sh          # Runs our setup helper.\n   ⮑ 📄 prestart.sh      # Checks if we need to init.\n   ⮑ 📄 setup.js         # Cute lil' CLI helper.\n
\n

The first thing we'll do is write a script that checks if /wp exists. If it doesn't, throw an error that we need to initialize the project since we don't have any of the core files we need.

\n
# prestart.sh\n#!/bin/bash\necho 'Check to make sure wordpress is here at all'\nif test -d ./wp/\nthen\n  echo 'yup we good'\n  exit 0\nelse\n  echo 'Project not initialized: Run `$ npm run init`'\n  exit 1\nfi\n
\n

I'm calling this prestart because I want to run it before npm start. Many times I'll be on autopilot, and after cloning a repo simply run `npm install` and `npm start`. This interrupts that process and lets me know I need a third step: `npm run init`. Let's put this in our package.json scripts:

```
# package.json
{
  ...
  "scripts": {
    ...
    "init": "./scripts/init.sh",
    "prestart": "./scripts/prestart.sh",
    "start": "npm run dev"
  }
  ...
}
```

We'll get to our dev tooling later. Let's take a look at what our init.sh script does:

\n
# init.sh\n#!/bin/bash\nnode ./scripts/setup.js  \n
\n

Not much! This just runs our setup CLI helper. You might not need all this, but since I built this system to help a team of developers work on many many projects you're gonna get it anyway.

\n
// setup.js\n\n// some nice deps for making a CLI.\nconst prompt = require('prompt')\nconst exec = require('child_process').exec\nconst colors = require("colors/safe")\n\n// Run and log a bash command\nconst bash = cmd => {\n  msg('green', `Running: ${cmd}`)\n  return new Promise(function(resolve, reject) {\n    exec(cmd, (err, stdout, stderr) => {\n      if (err) reject(err)\n      resolve(stdout, stderr)\n    })\n  });\n}\n\n// Log a message\nconst msg = (color, text) => {\n  console.log(colors[color](text))\n}\n\n// do the magic\nconst setup = (err, result) => {\n  if (err) msg(`red`, err)\n\n  msg('yellow', 'WordPress configuration values ☟')\n\n  for (let key in result) {\n    msg('yellow', `${key}: ${result[key]};`)\n  }\n  // run our check-install script.\n  bash(`${process.cwd()}/scripts/check-install.sh`)\n  .then(ok => {\n    // add our project to hostfile\n    bash(`nanobox dns add local ${result.name}.local`)\n  })\n  .then(ok => {\n    // explain the next step\n    msg('green', `Run npm start, then finish setting up WordPress at ${result.name}.local/wp-admin`)\n  })\n}\n\nmsg('green', 'Making Progress!')\nprompt.start();\nprompt.get({\n  properties: {\n    name: {\n      description: colors.magenta("Project name:")\n    }\n  }\n}, setup);\n
\n

This will open a CLI asking for the name of the project, run the check-install.sh script, create the hostfile line for our local DNS at <project-name>.local, and log the next action that you need to take to finish installing Wordpress.


Let's take a peek at our check-install.sh file:

\n
# check-install.sh\n#!/bin/bash\necho 'Check to make sure wordpress is here at all'\nif test -d ./wp/\nthen\n  echo 'yup we good'\nelse\n  echo 'nope we need that'\n  degit git@github.com:nanobox-quickstarts/nanobox-wordpress.git wp\nfi\nrsync -va --delete ./plugins/ ./wp/wp-content/plugins/\nrsync -va --delete ./theme/ ./wp/wp-content/themes/my-theme\n
\n

Very similar to prestart! The biggest difference is the bit where we use degit to clone Nanobox's official Wordpress repo into our untracked /wp directory. Degit will only get the head files, and none of the git history. Nor will it keep the .git directory, basically making this a super clean, super fast way to download a directory of files. It's great. The last thing this does is wipe out any themes or plugins that we don't want or need in the core files and sync our own tracked directories to the correct places in the Wordpress core file structure.


Now would be a good time to talk about plugins.

## What's up with plugins?

Wordpress has a million plugins. We're going to focus on some of the basic ones that almost every Wordpress project ever needs, and should honestly be part of Wordpress. Building sites without these is a pain. Here they are:

\n
📁 /project-name\n⮑ 📁 plugins\n  ⮑ 📁 advanced-custom-fields-pro\n  ⮑ 📁 custom-post-types-ui\n  ⮑ 📁 timber-library\n  ⮑ 📁 wp-migrate-db-pro\n
\n

There are a couple more in my repo to do things like order posts in the CMS and import CSVs. Not super necessary, so we won't talk about them here.

### Advanced Custom Fields

ACF is a staple of Wordpress development. It lets us define new key/value pairs to extend the data model of things like posts and pages, and allows us to create a set of global variables available from anywhere. It sounds so simple that it's surprising it's not part of Wordpress.

### Custom Post Types UI

CPT-UI creates an interface in the admin panel for creating new post types. Out of the box, Wordpress comes with Posts and Pages. CPT-UI lets us build new types like Projects or Case Studies or whatever our data model needs. Again, surprising that this isn't just part of Wordpress. C'est la vie.

### WP Migrate DB

Migrate DB lets us ... migrate ... our ... DB. This gives us the ability to sync our databases across environments and get media uploads and things without needing to write magic MySQL queries while tunneled into open database ports on virtual machines. This is better. Believe me.

### Timber

The Timber library from Upstatement is the greatest thing to happen to Wordpress development, after those plugins that should just be part of Wordpress. Timber introduces the concept of layout templates to Wordpress. This lets us write PHP to manipulate data, and pass that data to a template file where we can write Twig templates rather than composing strings in PHP. Basically ...

\n
<?php echo $myvar ?>\n
\n

Turns into:

{% raw %}

```twig
{{ myvar }}
```

{% endraw %}

This lets us write templates with a templating language, and write server-side business logic in a server-side programming language. Truly revolutionary.

## What we talk about when we talk about Wordpress development: or, The Theme

With all this initial work around Wordpress core, development environments, and a basic plugin ecosystem in place we can start talking about the good stuff: the theme!

\n
📁 /project-name\n⮑ 📁 theme\n   ⮑ 📁 es6              # Source JS\n   ⮑ 📁 scss             # Source SCSS\n   ⮑ 📁 routes           # PHP route logic files\n      ⮑ 📄 index.php\n      ⮑ 📄 page.php\n      ⮑ 📄 post.php\n   ⮑ 📁 views            # Twig templates\n      ⮑ 📁 layouts\n      ⮑ 📁 pages\n      ⮑ 📁 partials\n   ⮑ 📄 functions.php    # This includes routing.\n   ⮑ 📄 screenshot.png   # Theme preview image.\n   ⮑ 📄 index.php        # Need this, but it's empty.¯\\_(ツ)_/¯\n
\n

We won't get too deep into this, since we're getting into more conventional territory here. Basically our es6 directory holds source JS that will get compiled into a bundle. Same with the scss directory, which gets compiled into CSS. We handle that with npm scripts in the package.json.

\n
# package.json\n{\n  ...\n  "scripts": {\n    ...\n    "css": "node-sass ./theme/scss/style.scss theme/style.css --watch",\n    "js": "rollup -c -w",\n    ...\n  }\n  ...\n}\n
\n

Hopefully none of this is too unusual — if it is, I recommend reading Paul Pederson's excellent article on npm scripts.


There is one part of this I want to touch on before moving on:

\n
# package.json\n{\n  ...\n  "scripts": {\n    ...\n    "sync:plugins": "rsync -va --delete ./plugins/ ./wp/wp-content/plugins/",\n    "sync:theme": "rsync -va --delete ./theme/ ./wp/wp-content/themes/fuzzco",    \n    "watch": "rerun-script",\n    ...\n  },\n  "watches": {\n    "sync:plugins": "plugins/**/*.*",\n    "sync:theme": "theme/**/*.*"\n  },\n  ... \n
\n

This bit sets up a watcher on our theme and plugins directories, which syncs our tracked working files to the correct place in our Wordpress core file structure.

## Functions, Routes, and Views

The last thing I want to touch on is the basic structure of using Timber to match routes with views.

\n
/** functions.php */\nRoutes::map('/', function($params){\n  Routes::load('routes/page.php', $params, null, 200);\n});\nRoutes::map('/:page', function ($params) {\n  $page = get_page_by_path($params['page']);\n  if ($page) {\n      Routes::load('routes/page.php', $params, null, 200);\n  } else {\n      Routes::load('routes/404.php', $params, null, 404);\n  }\n});\nRoutes::map('/blog/:post', function($params){\n  Routes::load('routes/post.php', $params, null, 200);\n});\n
\n

These are Timber routes defined in the functions.php file. This replaces the standard routing of Wordpress, and we have to change the structure of the Wordpress permalinks to anything other than the default for it to work. This is documented in Timber.


When our server gets a request at a route of /page-name, it will call the page.php file and pass it the params associated with the route.

\n
/** page.php */\n<?php\n  $context = Timber::get_context();\n  $post = new TimberPost();\n  $context['page'] = $post;\n  \n  Timber::render( array(\n    'views/pages/page-' . $post->post_name . '.twig',\n    'views/pages/page.twig'\n  ), $context );\n?>\n
\n

The page.php file assigns some variables, interacts with Wordpress to get and shape our data, and then renders the twig file associated with the page. In this case, it's going to see if there's a template that matches the name of our page, otherwise it will render the default page template.

## Back to the beginning

You've built your theme! Maybe it's a simple hello world, maybe it's a heavy duty big ol' thing. Either way, it's time to deploy.


You can use Nanobox to create a droplet for your server. Nanobox will give your project a name in their system, and expose the URL for the server at <your-project>.nanoapp.io. I like to use the convention project-dev, project-stage, and project-prod. Once you create your project in Nanobox, the hard part is over and you can let them do the heavy lifting:

```shell
$ nanobox deploy project-dev
```

Or we can map this to our NPM script:

```shell
$ npm run deploy:dev
```
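
Wiring those deploy commands into the package.json scripts might look something like this, following the project-dev / project-stage / project-prod naming convention above:

```
# package.json
{
  ...
  "scripts": {
    ...
    "deploy:dev": "nanobox deploy project-dev",
    "deploy:stage": "nanobox deploy project-stage",
    "deploy:prod": "nanobox deploy project-prod"
  }
  ...
}
```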

This will containerize our application, push it to our droplet, hydrate the entire thing, and serve! Now we can use Migrate DB to move our database around, and we're in business.

## Putting it all together

The project repo is a turnkey, ready to roll version of all the above. It contains all the tooling needed to get started, and if you've followed along with this guide, you should be able to get started in no time.


As always, feel free to reach out to me in your venue of choice to talk about any of this — I would be happy to help you set this up for your own Wordpress project!

---
title: Soft Proof
slug: soft-proof
date: 2017.10.29
description: Translations and compromises in image making or; the Image Cult Society.
type: text
---

There's an interesting thing that happens when a new idea or technology gets introduced then quickly assimilated into the background hum of our daily lives. It starts out with a discrete name — a clear identifier of what this thing is and means. Then this name just sort of ... slips away. It becomes so normal that to name it would seem strange. Its original name doesn't seem to fit any more, as the name existed in the first place to demarcate the new thought from the ordinary. And now the new thing is just ordinary. Think about Google Maps. It's just ... a map. In 2005, when Google Maps was first released, its particular approach to the interface of a digital map was called a 'slippy map'. Weird, right?


This is an interesting phenomenon around cultural approaches to technology, but not actually what I want to talk about. I want to talk about soft proofs. Soft proofs are an example of this taken to an extreme — you use them every day but you have probably never heard of them. There is no need for the soft proof to be something other than normal, the soft proof just is normal. But what is a soft proof, and why is it so normal? And why do I want to explore a topic so quotidian that the word used to mark it as interesting is so faded and worn?


A soft proof is a way of viewing an image before the image has been reproduced mechanically. In contrast to the soft proof is the hard proof: a way of viewing an image immediately after it's been reproduced mechanically. Basically, a soft proof is an image on a screen that will be sent to a printer. Otherwise known as an image. Its need for a discrete name seems so unnecessary that it seems bizarre to refer to all images – even this text as I write it – as soft proofs. But that is, in essence, what they are. We see images on our screens that can be reliably turned into images on other people's screens, and even into physical images on paper.


The reason why this needed a name to demarcate it as special — during the advent of the digital — is that this is a really hard problem to solve. There are a range of mathematical models for approaching a relatively unified theory of color and vision, and a wide range of physical pieces of machinery that are tasked with producing those images — from printing presses to monitors. The act of ensuring an image can be predictably reproduced is necessarily an act of translation. Translating from this color space to that; from an additive color model of a screen to the subtractive color model of ink and paper; approximating the color of a paper stock to be printed on.


This translating process is done using something called a Color Profile. A Color Profile is a set of rules for ensuring that an image created with red, green, and blue light can be replicated on off-white paper using cyan, magenta, yellow, orange, and green inks. The current workflow of digital to print is so smooth, so ubiquitous and mundane, as to occlude the massive technological feat that supports it.
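
To get a feel for the kind of translation involved, here is the naive, textbook RGB-to-CMYK conversion as a sketch in JavaScript. A real Color Profile does far more than this (it accounts for device gamuts, ink behavior, and paper color), which is exactly the point:

```javascript
// Naive RGB (0-255) to CMYK (0-1) conversion. Real color management
// maps through profiled color spaces; this is only the textbook math.
function rgbToCmyk (r, g, b) {
  var k = 1 - Math.max(r, g, b) / 255
  if (k === 1) return { c: 0, m: 0, y: 0, k: 1 } // pure black
  var c = (1 - r / 255 - k) / (1 - k)
  var m = (1 - g / 255 - k) / (1 - k)
  var y = (1 - b / 255 - k) / (1 - k)
  return { c: c, m: m, y: y, k: k }
}

console.log(rgbToCmyk(255, 0, 0)) // pure red: { c: 0, m: 1, y: 1, k: 0 }
```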


This feat was undertaken by a small group of technology companies in the early 90s, who collaborated to define a universal standard of how this would work.

> The International Color Consortium was formed in 1993 by eight industry vendors in order to create an open, vendor-neutral color management system which would function transparently across all operating systems and software packages. . . . The eight founding members of the ICC were Adobe, Agfa, Apple, Kodak, Microsoft, Silicon Graphics, Sun Microsystems, and Taligent.
>
> — Color Science History and the ICC Profile Specifications, Elle Stone

The current, baseline profiles built around RGB and CMYK came about with the rise of digital image-making, which is the basis of the current world around us, a world built on and predicated by images.


The dominant translation is dominant because it — to a large degree — works. Creating a color profile is really hard, mathy, physicsy stuff. It's hard to do yourself. But CMYK/RGB cuts the corners off the world to make it fit into a gamut that can be handled. By necessity it's a compromise: what colors are we not handling in order to handle the maximum number of colors? What parts of the color space get left behind?

> Many of these issues give me the feeling at times of reluctant rather than open co-operation between some of the companies that created this standard. Having said that, there does seem enough information in the public standard (when combined with examining available existing profiles) to effectively and accurately characterize color profiles of devices and color spaces. I could imagine there being some poor results at times though, due to some looseness in the spec.
>
> — What's wrong with the ICC profile format anyway?, Graeme Gill

It's important to understand this compromise; to understand how it works and what exchange we're making in the process. What are we giving up, and what are we getting in return?


What are we leaving on the table? For example, photography (up until the 80's) was calibrated for white people. African Americans and other dark-skinned people photographed poorly. They were outside the color space. The story goes that school photos of interracial classrooms would have rows of perfectly exposed white kids, and voids where the black children should have been (Adam Broomberg). More than this, the standards only changed with industry pressure from chocolate and furniture manufacturers — a realm of capital where the browns and blacks matter (Rosie Cima).

> Film chemistry, photo lab procedures, video screen colour balancing practices, and digital cameras in general were originally developed with a global assumption of 'Whiteness.'
>
> — Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity, Lorna Roth

With the creation of the RGB color space, with the creation of the ICC, we ceded the visual world to supermassive tech interests, much like we've ceded our privacy and personal data. In doing so we've inherently made the creation and dissemination of images into a tool for capital — one that supports dominant power structures.


How do we understand the implicit, invisible, baked-in assumptions of the soft proof? We can start by operating outside the parameters of the soft proof, recognizing it as a tool to use or not use. The gap between the soft proof and the hard copy is measured in the gap between the tools used to plan and prepare versus the tools used to produce, and we can move into this gap and inhabit it. We can create work here, and in doing so reclaim some of the space that we've given away.


The Risograph, for example, has a toolchain for soft proofing, but the machine — through its high speed and low cost — also opens up the possibility of designing images through iterative hard proofs; blending the techniques of the modern digital print process with the classical analog ones.


The web is a strange medium – a blending of soft and hard spaces. A plastique space, with plastic proofs and plastic copies. The same process of translation is at work — between the still & the interactive, flowing image. This is why showing comps & wireframes of websites to clients can be so tricky: our culture of image cult and technological process can elide the critical differences we sense as agents of these systems.


This is a call for a Marxism of image making — to seize the means of production. To create radical images & tools that exist in the corners of the gamut and color spaces discarded by the soft proof. To understand that planning is not doing, and take control of our own visual languages.

---
title: How to Design While Developing
slug: how-to-design-while-developing
date: 2016.5.15
description: Moving beyond the idea that the designer and the developer on a web project are different people, and that somehow those are different things.
type: text
---

For a long time, websites got made with one person designing how it should look and one person developing the code that made it look that way.


A lot of times, this is still how things get done - one team makes static Photoshop comps, and hands them off to a team of developers who know stuff like whether React or Ember or Node or Ruby is the best thing. This can sometimes cause friction. The designer expects the website to look exactly like the comp, the developer writes a bunch of custom CSS and HTML to fit the design, and whoever needs to make sure the whole thing is WCAG compliant spends weeks hating both of them. When the next comp comes down the line, it all happens again. For a big site, this leads to design drift, and a hugely tangled codebase that's a nightmare to try and untangle.


This splitting of systems is an artificial one that's sustained by organizational assumptions: we need designers, and we need developers. The thing about Design though - capital D design - is that it's simply a method of deciding on a structure to accomplish a purpose. The design tools and methods one uses to accomplish good design are always in tune with the thing being made. A building doesn't go from painting to construction drawings, nor does a car go from modeling clay to racetrack. The clay and the paint are very useful steps to start the process of design. They help us to be creative and loose and explore new solutions to the problems at hand. This exploratory work helps us to understand how a thing will feel in the world.


These initial drawings and sketches get translated into their final structures, and translation is a process that can enrich both what is being translated to and what is being translated from. Every precise moment may not align directly, since differences in context can have deep implications for meaning.


We turn our drawings into objects with all the considerations of the final materiality present. We can’t ignore the shape of the engine or structural code requirements, although our models and paintings certainly can. Design is provisional until the point at which it exists in the world, and when talking about the web even this isn’t any sort of ending. A website is its own sort of thing, with its own structure and requirements that need to be present and known throughout the entire process of design.


A drawing and a website will never look the same. This is mostly because a website isn't static. Because of this, what a website looks like is just a small piece of what a website is. A website is what it enables its users to accomplish, what its developers have to do to keep it stable and moving forward.


The best way to meet our goals as people who make good websites is to focus less on those drawings right away. Instead we should think more about simplicity and elegance than the detail-oriented perfection of a jpeg.


Don’t get me wrong - the jpeg can be important. It shows us how we think we can solve our problems, a glimpse of how we want our website to looks and feel, and the tone we want to communicate. Through all this it needs to have to room to change and breath as it comes to life and become a real thing. Our drawings should not be on the level of “what does this look like” but “what problem does this solve and how”. At every step in the process we can work to make the real thing better, to solve new questions that arise as we move through the process of design / development – a process where there is no gap between those two ideas.


To design a website is to develop it - and as we develop a website we are constantly making design decisions. A designer cannot abdicate their responsibility to design by saying “well, my jpeg looked great.” A developer cannot abdicate their responsibility to a codebase by saying “well that’s what they wanted in the comp.”

---
title: Building a Client Library for ArcGIS
slug: building-a-client-library
date: 2015.3.09
description: Writing a wrapper client library to smooth out design weirdness at the API level leads to plenty of design thinking on the way things should be.
type: text
---

This year I built a JavaScript wrapper for Node and the browser around the ArcGIS REST API to simplify working with that platform as a developer. This was an exercise in API design, as well as in making a tool that I wanted to use but didn't exist yet. The project is a bare-bones library to ease interactions with the ArcGIS REST API in JavaScript and Node apps.


Sometimes – and for sure in this case – an API can be rough, built over time, and not provide the sort of logical models that work well with specific language environments. This was the case with the ArcGIS REST API that I was running into. A lot of the decisions had been made over the course of years, and didn't translate very smoothly to an environment as young as Node.js.


The first step was to figure out what problems I wanted to solve. A lot of my work with Esri PDX has been about content handling, and so this is where I started: reading all the docs to get a big picture of what's going on with the API, and talking to everyone who had done work like this before to figure out what problems they needed to solve. From there I felt I had enough context and information to make the thing useful for more people than just me, and to make sure that it was coherent with the underlying goals of the original API.


This project works to bridge the gap between the ArcGIS REST API and a contemporary Node application. This library is a UI in the most basic sense of the term — it provides an interface between the developer and the servers. That interface needs to be well designed and thoughtful in order to make the process as smooth, intuitive, and pleasurable as possible.


One of the most important parts of the project is to provide developers with a way to access the ArcGIS platform without needing to architect their entire application around opinionated frameworks (like Dojo, for example). Though the library itself is written in ES6, it's distributed as plain, normal ES5 – both as a node package and a packaged bundle. This means it works both in Node and the browser, and has very few opinions on how it integrates with the rest of your app.


Right now, the library wraps most of the basic platform content management and interactions - getting and editing users, organizations, and items. The Node ArcGIS Client Library is an open source project — so its scope will increase as the community works to accomplish more goals and workflows.

## Setting up the client

The first step in using the library is initializing the client with your target portal.

\n
var ArcGIS = require('arcgis')\nvar arcgis = ArcGIS()\n
\n

This sets up a default object for interacting with the API. This default is going to talk to ArcGIS Online as an anonymous, unauthenticated user. One can authenticate this client session as a named user by passing in a user token obtained from a successful OAuth login process.

\n
var arcgis = Arcgis({\n  token: namedUserToken\n})\n
\n

This isn't exclusive to ArcGIS Online. The API for interacting with your organization's installation of Portal or Server is the same. Setting up the client session with your instance is done by specifying your API domain.

\n
var arcgis = Arcgis({\n  domain: 'ago.my-server.com',\n  token: namedUserToken\n})\n
\n

Beyond the initialization of the client, the library is exclusively async. All the functions return promises by default.

\n
function log (m) {\n  console.log(m)\n}\nfunction ohNo (err) {\n  return new Error(err)\n}\narcgis.request()\n.then(log)\n.catch(ohNo)\n
\n

You can also pass in a node-style callback, if you'd prefer.

\n
function log (err, results) {\n  if (err) {\n    return new Error(err)\n  } else {\n    console.log(results)\n  }\n}\narcgis.request({}, log)\n
\n

Both methods work just as well, and use all the same business logic. I like promises, but maybe you don't. This is one of the main reasons the library does its best to avoid inflicting my opinions on your codebase.
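
Supporting both styles from a single code path is a common pattern. Here is a minimal, illustrative sketch of how that can work; this is not the library's actual internals:

```javascript
// Wraps a promise-returning function so callers can either chain the
// returned promise or pass a node-style callback as the last argument.
function promiseOrCallback (doWork) {
  return function (options, callback) {
    var promise = doWork(options || {})
    if (typeof callback === 'function') {
      promise.then(function (results) { callback(null, results) })
        .catch(function (err) { callback(err) })
    }
    return promise
  }
}

// One implementation, two calling conventions.
var request = promiseOrCallback(function (options) {
  return Promise.resolve({ ok: true })
})

request({}).then(function (r) { console.log(r.ok) })      // promise style
request({}, function (err, r) { console.log(err, r.ok) }) // callback style
```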


Once we have an authenticated session, we can do all sorts of stuff — like figure out who we are:

\n
function hello (user) {\n  console.log('Hello, ' + user.firstName)\n}\narcgis.user('NikolasWise').then(hello)\n
\n

We can get all of the items that user has in the platform:

\n
function getContent (user) {\n  return user.content()\n}\nfunction logContent (content) {\n  console.log(content)\n}\narcgis.user('NikolasWise')\n.then(getContent)\n.then(logContent)\n
\n

Or a list of all the groups that a user is a member of.

\n
function logGroups (item) {\n  item.groups.forEach(function (group) {\n    console.log(group.title)\n  })\n}\narcgis.user('NikolasWise').then(logGroups)\n
\n

The library also can interact with the user's organization, returning information, members, or all the content associated with the organization.

\n
function logOrg (org) {\n  console.log(org)\n}\narcgis.organization('esripdx').then(logOrg)\n
\n

The organization call defaults to 'self' — whatever organization the current user is a member of.

\n
function getMembers (org) {\n  return org.members()\n}\nfunction log (members) {\n  console.log(members)\n}\narcgis.organization().then(getMembers).then(log)\n
\n

Many of the content calls are abstractions or helper methods for longer, more complicated calls to the search endpoint.

\n
function getContent (org) {\n  return org.content()\n}\nfunction log (items) {\n  console.log(items)\n}\narcgis.organization().then(getContent).then(log)\n
\n

In this way we are able to create a transitional layer between the API itself (a super complicated call to the search endpoint) and what application developers need (all my organization's content).

\n
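As a sketch of what such a transitional layer might look like, here is a small helper that assembles a long search call so application code can just ask for "all my organization's content". The query grammar and parameter names are illustrative only, not the real ArcGIS search syntax:

```javascript
// Hypothetical sketch of a "transitional layer": a helper assembles
// the complicated search request so callers never see its grammar.
// The query string format below is invented for illustration.
function buildOrgContentQuery (orgId, options) {
  options = options || {}
  return {
    q: 'accountid:' + orgId,        // scope the search to one org
    num: options.pageSize || 100,   // page size for the request
    sortField: options.sortField || 'modified'
  }
}

// Application code expresses intent, not search-endpoint details.
var params = buildOrgContentQuery('esripdx')
```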

Working with content

\n

Platform content is the weakest link in the library as of today. ArcGIS supports a huge range of item types, and quite a number of sophisticated things you can do with those item types. For now the basics are more or less in place — like getting an item's details:

\n
var layerID = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'\nfunction logItem (item) {\n  console.log(item)\n}\narcgis.item(layerID).then(logItem)\n
\n

Or updating those details and editing the permissions:

\n
var layerId = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'\nfunction updateItem (item) {\n  return item.update({\n    snippet: 'Building footprints in my neighborhood in Portland, Oregon'\n  })\n}\nfunction shareItem (item) {\n  console.log(item)\n  return item.permissions({\n    access: 'public'\n  })\n}\narcgis.item(layerId)\n.then(updateItem)\n.then(shareItem)\n
\n

So far, there's some support for item-type-specific methods that are starting to open up the possibilities of manipulating your content from Node — like getting all the data in a layer.

\n
var layerID = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'\nfunction getData (item) {\n  return item.data()\n}\nfunction logData (data) {\n  console.log(data)\n}\narcgis.item(layerID)\n.then(getData)\n.then(logData)\n
\n

There is a lot more of the platform that could be covered than this - services, analysis, layer creation, and tile publishing are all actions that this library, or ones like it, could support.

\n"},{"meta":{"title":"Map as Context","slug":"map-as-context","date":"2015.2.19","description":"

Understanding maps as designed objects and attempting to define a theory for making digital maps on the internet as good as old paper maps.

\n","collection":"texts","timestamp":1424332800000},"content":"

Looking at maps as they exist today on the internet, we have a pretty solid idea of what that means. It means they look like Google Maps. This is a pretty recent design solution to the 'what is a map on the internet' problem, only about 10 years old. Which is old for the internet, but pretty young for maps. The Google Maps model is a good one, too! It's a very effective way to present what is essentially a road map - a driver's atlas for navigating a city or a country. Google Maps replaces the AAA State Highway map really effectively, but perhaps there are some weak links in how it applies to other, less navigation-oriented maps.

\n

\n

\n

\n

There are a large number of really beautiful maps that exist only on paper, and a large number of really ugly maps that exist on screens. How can we start to think about maps in a way that bridges this gap? Is there a way we can approach these other, not-a-roadmap-map maps more effectively to make them as good as their paper-bound cousins?

\n

To approach this question from an angle, it's worth taking a moment to think about what a map is. The map is a miniature that pivots around the body to represent the gigantic enormity of the physical world. The map shrinks the world down to something that can be held in the hands and seen entirely with the eye. The map connects the vastness of reality to the body in a way that can be handled - both physically and mentally.

\n

This creates tension with maps on the screen - especially on the internet. The screen cannot be touched, and the internet cannot be related to the body. Phones and tablets mitigate this by bringing the screen closer, and moving it to the size of the hand, but the core difficulty remains – if a map exists to scale down the enormity of the world to the size of the body, the internet itself has no boundaries or edges, and no way to relate the screen to the body.

\n

Why is this connection between world and body important? The map provides context for understanding world-scale systems and landscape-scale concepts in a human-scale object. The map is a typology of communication that sits halfway between the book and the visual art object. Both the book and the painting – or the print – are techniques used to provide access to concepts and ideas beyond the scale of a single individual. The book can contain centuries of intellectual thought; the painting can expose feelings and emotions that touch any number of people. If the map exists between these two modes of communication, then its goal is to use a volume of thought, of data, of measurements to expose a broad, underlying concept. This can be an environmental truth (or the supposition of one) or a societal insight.

\n

The map does this through a very specific set of visual design tools with formal qualities that lend themselves to the problem at hand. These formalities are partly defined and structured by the technologies behind the production and distribution of the map.

\n

The first maps were hand drawn, and correspondingly have attributes of other hand-made visual works. With the advent of printing, maps started to be carved into wood, and duplicated. After wood came etching, and after that lithography. In each of the print techniques, certain marks are favored and made possible by the medium of the matrix itself. Shared across all the print techniques, however, is the concept of plates – individually drawn layers for different colors. Equating individual plates to individual colors to individual typological concepts shown on the map is a big reason why printed maps are so good.

\n

The careful and deliberate application of the map's formal characteristics to directly address the ideas and concepts to be communicated – to address the use of the map – is what makes a map good.

\n

On the internet, we make maps differently. GIS data sets mean that maps can be made through mathematical and analytic tools – comparing sets of data, and creating new sets of data to answer questions. A robust and open set of public data means that there are map making tools which provide ways to style and combine existing content.

\n

These techniques utilize a relatively static map that purports to usefully describe the entire geography of the planet. All of the maps made this way are intended to sit within a larger application, itself designed to solve a problem.

\n

Most of the time, these maps fail to provide a meaningful connection to the concepts presented - they lose the essential aspect of the map that joins world to human scales, instead operating at the world-to-world level. The endless map of the internet is itself incomprehensible to the body. The maps of the internet are simultaneously too broad and too simple, providing too much and too little. The problem the map presents itself as a solution to is usually far too muddy, and the resulting lack of clarity of purpose leads to a map that itself lacks clarity.

\n

The answer is to look to the main purpose of the map — simplicity, clarity, and the miniaturization of the world to pivot around the human hand — while using the formal visual tools and design lessons of the previous several centuries of paper maps – the balance of simplification and exaggeration, clear conceptual separations, and an embrace of the limitations of the object to provide focus.

\n

In all, the map must be appropriate and natural for its intended use, playing its role in the overall purpose of the design solution.

\n"},{"meta":{"title":"Pixels don't matter.","slug":"pixels-dont-matter","date":"2013.10.21","description":"

They don't.

\n","collection":"texts","timestamp":1382338800000},"content":"

Talking about mobile design today, the conversation is couched in terms of "pixel perfect", or designs are made by "pixel pushers". This isn't a useful way to approach layout design.

\n

In the analog world we talk about the three fundamental components of print.

\n

\n

\n

\n

I know what you're thinking right now: "Wait, we're not analog here. We're digital. We need pixels." And that's true, we do need pixels. Let's reframe these principles:

\n

\n

\n

\n

Thinking about our substrate, our materials are our devices, which render with pixels.

\n

But our materials change -

\n

\n

\n

\n

Our devices gain more and more pixel density, their proportions change, technology improves, CSS abstracts a 'pixel' of its own, Android exists, and all of a sudden our materials get away from us and focusing on the pixel gives us an aneurysm.

\n

But don't worry, because:

\n

\n

\n

\n

We still have two other layers to work through.

\n

\n

\n \nOur materials don't dictate our aesthetics nor our concept, but materials do inform our decisions. So think about pixels, but put them in the proper place - as the substrate of our work.

\n

Measurement systems are abstract ideas -

\n

\n

\n \nAn inch is not a thing, a foot is not a thing. The pixel is a physical thing, and as a physical thing we've seen that it's prone to change.

\n

The pixel is a terrible way to measure things.

\n

\n

\n \nSo use another system to measure, any other system. The ratios, the proportions, the relationships and the hierarchies are important. That's where we do our work as designers.

\n

So don't stress pixels. Spare them a thought when you get started, and know that at some point in the process they'll make themselves apparent, and will need to be addressed. But the pixels will let you know, and when you need to deal with them they'll be there.

\n

Show Time

\n

These are some comps for the MapAttack mobile app.

\n

\n

\n

\n

Every element is specified by exact pixel counts. The typefaces are defined, the margins, the padding, and the border strokes are each precisely determined and labeled.\nDon't get the wrong idea - this isn't my hypocrisy you're seeing. These designs didn't start life here, this is just where they ended up.

\n

Before handing off my design, I basically ran pre-press on it. Our developers do need pixels - they need these values. They need them for execution and production. So we, as their designers, need to give them pixels. But we also need to give them more than just that.

\n

\n

\n

\n

We need to give them the blueprints for how we think about the design. That means showing what's proportional as proportion, showing what's fluid as fluid. This gives our static comps the life they need to work for a huge range of devices.

\n

Love the Pixel, let it be a Pixel.

\n

Letting Pixels be beautiful, blocky, physical little things means using them to render our designs, not dictate them. It means using actual measurement systems to measure. And more importantly, it gives us a way to understand our layout better, to arrive at more considered designs, and to understand our designs as they live on actual hardware in the actual world.

\n"}] \ No newline at end of file +[{"meta":{"title":"The Shape of the Problem","slug":"shape-of-the-problem","date":"2021.06.29","description":"

Ontological data structures, real-time editing, and what a web app really is man.

\n","type":"text","collection":"texts","timestamp":1624950000000},"content":"

At work the other day I was thinking about a problem: Websites as User Generated Content, a part of the business I've been low-key thinking about for nearly a year. A chance conversation with Reuben Son about Vox's content editing tool Kiln altered my perspective on who a user is when thinking about UGC. For them, their users are their editors and authors. Kiln works by loading an editor interface directly over the rendered web page, and allows for editing of any portion of that webpage.

\n

It occurred to me that one could use a real-time NoSQL database like FaunaDB or Firebase to store a document model, run an app that subscribes to changes to this database and renders the document model into the DOM, then do the same bit where an editor lies over the page and allows for editing, posting changes directly to the NoSQL database. The resulting update would re-render the document for the editor, and anyone else also currently viewing the app. This would look like Squarespace, but be naturally multi-tenant. After editing, a static site could be generated and hosted on S3 to serve to a larger audience. Questions around draft states, not publishing another user's edits, and other logistical things started to crowd my mind, but the core idea was interesting enough for me to decide to put a prototype together.

\n
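The subscribe-and-render loop can be sketched in a few lines. An in-memory store stands in for FaunaDB or Firebase here; its `subscribe`/`update` API is hypothetical, but the shape of a real-time listener is the same:

```javascript
// Minimal sketch of the idea: a store notifies every subscriber when
// the document model changes, and each subscriber re-renders.
// The store API is invented; a real app would use a database SDK.
function createStore (doc) {
  var listeners = []
  return {
    subscribe: function (fn) { listeners.push(fn); fn(doc) },
    update: function (patch) {
      doc = Object.assign({}, doc, patch)
      listeners.forEach(function (fn) { fn(doc) })
    }
  }
}

// Renderer: document model in, markup out. Every subscriber --
// the editor and any other current viewer -- sees the change.
function render (doc) {
  return '<h1>' + doc.title + '</h1>'
}

var store = createStore({ title: 'Hello' })
var latest
store.subscribe(function (doc) { latest = render(doc) })
store.update({ title: 'Hello, world' }) // the editor posts a change
```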

Unfortunately (well…) before I could get to it, I was dropped in cold to a meeting about an impending migration of our knowledge base. We have hundreds of articles whose URLs are going to get hundreds of new URLs, and we have to update them across our entire application, hopefully also solving the problem for good and never having to do it again. Since our larger content strategy was at the forefront of my mind, I pitched creating a Knowledge Organization System that associates a UUID with an article URL; applications then consume that UUID and never worry about the URL. Front the whole thing with a content management system and our support team can update article URLs whenever and it's never a problem again.

\n
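The Knowledge Organization System idea reduces to a lookup: applications hold a stable UUID, and a single index owns the UUID-to-URL mapping. The UUID and URLs below are invented for illustration:

```javascript
// Hypothetical sketch: one index owns the UUID -> URL mapping.
// All values here are made up for illustration.
var index = {
  'a1b2c3d4-0000-4000-8000-000000000001': '/help/uploading-photos'
}

function articleUrl (uuid) {
  return index[uuid] || null
}

// When support migrates the knowledge base, only the index changes;
// every application keeps consuming the same UUID.
index['a1b2c3d4-0000-4000-8000-000000000001'] = '/hc/articles/uploading-photos'
```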

That's when I realized that these two problems are the same problem. Both have the same set of concerns and desired behaviors: a collection of structured data documents that support CRUD operations, paired with a visualization of the document. So:

\n
- A collection of documents.
- A system for creation and editing of these documents.
- A web app for rendering and visualizing these documents, either in progress or some production state.
\n

Rephrased, the components are:

\n
1. The Ontology / Structure
2. The Editor Interface
3. The Rendered Form
   a. A Dynamic form
   b. A Static form
\n

Note that the Dynamic form is not necessarily a server or client-side rendered experience, but a way of seeing the changes you're making before committing to a production state. The Static form then is not necessarily a static asset (it can be server- or client-side generated from a database) but denotes a stable production state as viewed by an audience.

\n

For the knowledge base, the ontology is the index of articles. For the UGC websites, it is content and component structure. The editor interface is a CMS, either a product or, like Kiln, something in-house that sits on top. The rendered form for the sites is a static html build; for the knowledge base it's a 301 redirect service.

\n

This is super abstract on purpose! I think this is the core structure of all web apps. Since this structure is so general as to be mostly useless (maybe interesting as a teaching tool), the value of a web app typology must come from taking an opinionated perspective on all three of the above points. This can be used as a tool to examine and critique popular web app typologies to validate the concept; how does a given typology express its disposition and behaviors in each component?

\n

Our first typology for analysis is the "Static Site" – an unchanging directory of html files generated at build time from a codebase, largely on a developer's local machine. The ontology of many popular static site generators is the local file system, and attendant metadata. The editor interface is the developer's command line and the source code repository (git really is an excellent content management system). The dynamic form is a dev server that runs locally and re-renders changes in real-time, and the static form is the built source code output to static assets.

\n

Another popular typology is the "Wordpress Site". A classic in web development. Here, the ontology is a MySQL database whose structure is invented ad-hoc by the developer. The editor interface is a PHP (and React, I guess, with Wordpress 5) web application that allows for manipulation of the MySQL database. The dynamic form is the "preview" of database changes or a "staging" environment for code and data, while the static form is determined by PHP templates that fetch data from the database and interpolate pages at run time, on each request.

\n

The "Shopify" is structurally the same as the "Wordpress", but swap PHP for Ruby like a good early-2000s startup and make the ontology a pre-determined e-commerce structure. I think this is the dominant web app disposition, with a range of opinions on how much should happen on the server and how much should happen on the client.

\n

Two other typologies I think are worth exploring: the "Notebook" and the "Data Driven Web App". The Notebook, like Observable, positions the ontology as an unstructured document. The editor interface is a word processor app for that document, and the rendered form is a combination of a pre-set app framework and the contents of the document. The dynamic form is a draft state of the document; the static form is a published state. Notebooks are very interesting and different, and Observable is a great example of one. Since we're in the neighborhood of Mike Bostock, let's talk about "Data Driven Applications". The ontology is the data doing the driving. The rendered form is source code for visualizing the data — the dynamic form is run locally while editing code, the static form is hosted on a server or CDN. The editor interface is relinquished to whatever real-world process governs the collection of the data.

\n

Each typology is a powerful conception of what a web app can be, and each one has a unique and distinct perspective on the three important parts of what a web app is. None is better than the others, since each has a different relationship to its parts, and a different definition of good.

\n

The shape of my two problems that were actually one problem gave me an idea for a new kind of web app typology, one that borrows from the Semantic Web and real-time web apps. The dispositions and technical behavior of the app would look something like this:

\n

1: The Ontology

\n

This kind of app will use off-the-shelf RDFa or JSON-LD ontologies. These ontologies can be extended or created, but must be valid RDFa or JSON-LD (either is fine, since they can be machine-translated into each other). This allows for deeply semantic structures, machine readable relationships, and lossless data transfer between systems. Structuring this data as a graph allows for narrow, tailored consumption of the data via GraphQL without dedicated API development and maintenance. It also allows for the entire data system to be visualized.

\n
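As a sketch of what an off-the-shelf ontology looks like in practice, here is a document borrowing Schema.org's Article type via JSON-LD. The field values are invented for illustration:

```javascript
// Sketch of a JSON-LD document using the Schema.org vocabulary.
// The id and field values are made up for illustration.
var article = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  '@id': 'urn:uuid:a1b2c3d4-0000-4000-8000-000000000002',
  'headline': 'Uploading Photos',
  'isPartOf': { '@type': 'WebSite', 'name': 'Knowledge Base' }
}

// Because the structure is semantic and machine readable, a consumer
// can discover the vocabulary from the @context URI alone.
var vocabulary = article['@context']
```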

Using these semantic ontologies rather than providing a blank slate and letting the data structures grow on an ad-hoc basis also saves a lot of time in creating the data models; since it's all semantic and robot-friendly, a single URI to the ontology should be enough to populate all the content models any given editor interface might need.

\n

2: The Rendered Form

\n

The guiding principle of the rendered form is to be as light as possible over the wire, for both the dynamic and static forms.

\n

The rendered form splits the difference between a JAMstack real-time application and a static site. The dynamic form is hosted as a web app, and subscribes to real-time changes in the ontology documents. As the data changes, the dynamic form updates to reflect it. This takes the development build off of the local machine and puts it where more than one person can see it at a time. The static form is a built collection of static html files that can be hosted from any CDN for the wider audience.

\n

3: The Editor Interface

\n

The editor is completely separate from the renderer, but a common protocol unites it with the renderer on one end and the ontology on the other. Load the editor on any given dynamic rendered page to get an experience where any given property of the ontology can be edited, written to the database, and the effects seen immediately in real-time by anyone connected to the dynamic render. The diffs can then be stashed, discarded, or published.

\n
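The common protocol between editor and renderer could be as small as property-level diffs. The message shape below is hypothetical, a sketch of how an edit might be applied live before being stashed, discarded, or published:

```javascript
// Hypothetical diff protocol: the editor emits property-level
// changes, and the renderer applies them without mutating the
// published document. All shapes here are invented.
function makeDiff (documentId, property, value) {
  return { documentId: documentId, property: property, value: value, state: 'draft' }
}

function applyDiff (doc, diff) {
  var next = Object.assign({}, doc)  // leave the original untouched
  next[diff.property] = diff.value
  return next
}

var doc = { id: 'page-1', title: 'Hello' }
var diff = makeDiff('page-1', 'title', 'Hello, world')
var preview = applyDiff(doc, diff)
// publishing would flip diff.state to 'published' and rebuild statics
```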

Conclusion

\n

The end result of an app like this would be an experience sort of like Glitch, sort of like Squarespace and sort of like Observable.

\n

With permissions around what rendering or editing app can see or touch what in the database, it's possible and even important that any given concept that relates to the system can be represented in the system — either the data itself or a metadata record that is indexical to the data. This allows the entire system to be meaningfully connected, which enables solutions for many common problems (URI mapping, incompatible data structures, duplicated databases, nightmare migrations, vendor lock-in) and opens the door to new use cases and implementations, like bespoke CRMs seamlessly integrated into a product or an ecosystem of editing experiences that are completely independent of any given renderer or ontology.

\n

A clumsy handle for this kind of app could be a Semantic Mono-Database. Not as catchy as JAMstack, SPA, or Static Site. We'll get there tho, I'm sure a meaningful name will present itself as I work to build the first real one of these things.

\n"},{"meta":{"title":"Factorial! Race!","description":"

Finding an upper limit on factorials in JavaScript

\n","date":"2021.06.07","slug":"factorial-race","collection":"texts","timestamp":1623049200000},"content":"

For the past while now, I've been tinkering on a side project that builds and graphs arbitrary probability distributions created by dice rolling outcomes. It's extremely niche and dorky, but it's been a really fun way to explore both product design development and new concepts in math and programming that have otherwise never presented themselves during my career.

\n

This is an article about the second bit.

\n

One of the interesting things that I discovered early on was that when adding the ability to multiply and reduce dice rather than just multiply or reduce dice (i.e. roll 1d6; roll the resulting number of dice; on a 4 or higher, roll again; sum the total results. Complicated!) the distributions are not normal, which means in order to actually graph the distribution we need to calculate every possible outcome. Not a big deal, since computers are good at this sort of stuff!

\n

However, I quickly discovered an upper bound: the calculation requires working with factorials. When the factorials get big, JavaScript gives up and returns Infinity. This is because there is a maximum limit to the size of a double-precision floating-point number that JS uses for the Number type. Wow, this got both mathy and programmery really quickly.

\n

This gave us an upper bound of 170!, since the rest of the distribution calculations don't like it when you pass them Infinity.

\n
const factorial = (x) => x > 1 ? x * factorial(x - 1) : 1\nfactorial(170) // 7.257415615307994e+306\nfactorial(171) // Infinity\n
\n

Lucky for us, JavaScript has implemented a Big Integer type for integers that are … well … big! I was able to refactor my factorial function to use BigInts.

\n
const big_factorial = (x) => x > 1 ? x * big_factorial(x - BigInt(1)) : BigInt(1)\nbig_factorial(BigInt(171)) // 1241018070217667823424840524103103992616605577501693185388951803611996075221691752992751978120487585576464959501670387052809889858690710767331242032218484364310473577889968548278290754541561964852153468318044293239598173696899657235903947616152278558180061176365108428800000000000000000000000000000000000000000n\n
\n

So what's our new upper bound? We can handle 170! easily, how high can we go? 1,000!? 10,000!?

\n
big_factorial(BigInt(1000)) // 402387260…0n\nbig_factorial(BigInt(10000)) // RangeError: Maximum call stack size exceeded\n
\n

Turns out 1,000! is a walk in the park. 10,000! gets a little more interesting! The error the function returns is about too much recursion. We're calling big_factorial from big_factorial ten thousand times, and the browser thinks this means something is wrong, so it bails out on the process.

\n

So, what if we refactor our recursive big_factorial to use a loop?

\n
const big_fast_factorial = (x) => {\n  let r = BigInt(1);\n  for (let i = BigInt(2); i <= x; i++)\n    r = r * i;\n  return r;\n}\nbig_fast_factorial(BigInt(10000)) // 284625…0n in about 70ms\n
\n

10,000! is fast! We can reliably get the result of that in less than 100ms. And since our loop will run as long as it needs to, our upper bound should now be based on compute and return time, rather than type errors or browser guardrails. Let's see what we can do now:

\n
big_fast_factorial(BigInt(20000)) // ~300ms\nbig_fast_factorial(BigInt(30000)) // ~650ms\n// …\nbig_fast_factorial(BigInt(90000)) // ~7238ms\nbig_fast_factorial(BigInt(100000)) // ~9266ms\n
\n

Things … start to get slow above 30 or 40 thousand factorial. Every additional ten thousand added to our initial number adds more and more time to the compute function. I'm sure there's some fancy O(n) complexity notation to express this, but I don't really want to figure that out. It's too slow to use in a UI above, say, 50,000!.

\n

Turns out tho, even mathematicians don't really calculate factorials this big. They use Stirling's approximation instead, since it's faster and "good enough". It looks sort of like this:

\n
𝑒^ln(𝑛!) = 𝑛!\nwhere\nln(𝑛!) = ∑_{𝑘=1}^{𝑛} ln(𝑘)\n
\n

It would be pretty cool to do this in JavaScript! And personally, I love "good enough". I've already got a handy function for running Big Sigma calculations:

\n
const new_arr = (s,n) => {\n  if (s < 0 || n < 0) { return []}\n  return Array.from(Array(n+1).keys()).filter(v => v > s-1)\n}\n\nconst Σ = (min, max, fn) => new_arr(min, max).reduce((acc, val) => fn(val) + acc, 0)\n
\n

So lets try this out:

\n
const log = (x) => Math.log(x)\nMath.exp(Σ(1, 1000000, log)) // Infinity\n
\n

Oh no! The end result of our 1,000,000! function is still Infinity. That's because one million factorial is … very big. It could still fit into a BigInt, but then we have another problem: we can't run Math functions on the BigInt type. And we can't rewrite the functions to use BigInts because the type is, by definition, only for integers, and 𝑒 is definitely not an integer. Even a math library like math.js has the same issues around typing, despite trying to account for it.

\n

Naturally, this leads to a simple proposal: FaaStorials! Fast Factorials as a Service! Since factorials are immutable, it should be possible to store the first 1,000,000 or so in a database, and provide an API for querying and returning them. Even a slow network request would be faster than computing the factorial locally. It should be possible to crunch (slowly) all the factorials, and write them to a database for retrieval on demand. I wrote this function and got about 7,000 rows written before I realized it would probably be expensive.

\n

According to my rough estimating, 1,000,000! would send a response that weighs about 700kb, and the whole database would be in the neighborhood of 350gb. This would cost me about $80 a month to store, maybe $100 a month to pay for the requests as well. I pulled the plug on the script.

\n

As with many problems, the upper bound ends up being defined by time and money, the end!

\n"},{"meta":{"title":"Headless CMS; A Brief Introduction","description":"

What is a Headless CMS, and how can it be useful for building websites and apps?

\n","date":"2021.05.28","slug":"headless-cms-introduction","collection":"texts","timestamp":1622185200000},"content":"

A Content Management System is a central aspect of any web project - even projects you would never think of as "having" or "using" a CMS. In these sorts of projects the content management system is the codebase, and the strategy for managing content is identical to the process for managing code. This, obviously, is not ideal for anyone who wants to edit content and not code, or is uncomfortable with the workflows that developers rely on for our day-to-day practice — like Git. The other end of the spectrum of CMS is something like Squarespace — the code is the content. Not ideal if you want to edit code and not content. A traditional CMS like Wordpress attempts to split the difference; the CMS controls the code and the content, but makes an attempt to keep them at least a little independent by storing content in a database and providing an admin interface for editing and authoring that content.

\n

All of the above approach the problem of content management with the same set of assumptions: the CMS is responsible for taking the content, combining it with the code, and assembling and delivering the entire website. Both parts are coupled together, with the CMS rendering the "head" of the website or app.

\n

In the past few years, with the rise of build-time generated and static sites, a new approach to this problem has been articulated and built by a number of companies. The basic idea underlying this new approach is that the CMS should only be in charge of content, and interacted with like any other API. This decouples the code from the content, and removes the CMS from any responsibility for interacting with the code at all. This style of CMS does not render anything out to the web, and is thus called a headless CMS. In short, a headless CMS has no website out of the box. This means there is no default theme (there are no themes!) to configure, and there are no classic "blog" visuals or interfaces to configure and map to content. A blog is just one of many things a headless CMS can be used for.

\n

A headless CMS has a number of advantages: the first among them is that the product in charge of managing content can focus solely on managing content, and be very, very good at authoring, creating, and editing without needing to also be a good tool for building web apps. The second biggest advantage is that it provides the development team with complete freedom to meet the real-world business use cases associated with the project without relying on the CMS to support those use cases.

\n

Search Engine Optimization is an excellent example of these two characteristics at work – we are completely free to implement any SEO improvements without any support from the CMS, because the CMS doesn't do anything but manage content. SEO tags and page metadata become content like any other content, and the codebase of the web app is responsible for rendering the actual website as it goes over the wire and gets consumed by browsers. Instead of relying on Wordpress plugins or trusting that Squarespace is following best practices, all of the implementation details of your SEO strategy are completely in your team's control — just like any project without a CMS integration — while the content itself is entirely in your strategy or marketing team's control.

\n
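Treating SEO metadata as content might look like this: the entry shape loosely mimics a headless CMS response, but the fields and values here are invented for illustration, and the rendering lives entirely in the app code:

```javascript
// Sketch: SEO metadata modeled as CMS content, rendered by app code.
// The entry shape and values are invented for illustration.
var entry = {
  fields: {
    title: 'Headless CMS; A Brief Introduction',
    description: 'What is a Headless CMS?'
  }
}

// The app, not the CMS, decides how metadata becomes markup.
function renderHead (fields) {
  return [
    '<title>' + fields.title + '</title>',
    '<meta name="description" content="' + fields.description + '">'
  ].join('\n')
}

var head = renderHead(entry.fields)
```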

A Headless CMS exposes content via an API, and that's all it does.

\n

Contentful provides a set of client libraries that allow content to be consumed in a developer's language of choice, meaning that the technologies or systems used to render your app can be selected by any set of criteria at hand, rather than being forced into a decision by the CMS – if you use Wordpress you're writing your app in PHP within Wordpress. With Contentful you can write your app in Go if you want and live your best life.

\n

Below is a quick overview of using the Contentful JavaScript SDK to access content in the Headless CMS. It returns JSON and can be used at run-time or build-time to add content to a website or app:

```js
const contentful = require('contentful')

const client = contentful.createClient({
  space: '<space-id>',
  accessToken: '<access-token>',
  host: '<host>'
})
```

The client provides a set of methods for interacting with and querying the content database:

```
getSpace: async ƒ getSpace(),
getContentType: async ƒ getContentType(),
getContentTypes: async ƒ getContentTypes(),
getEntry: async ƒ getEntry(),
getEntries: async ƒ getEntries(),
getAsset: async ƒ getAsset(),
getAssets: async ƒ getAssets(),
createAssetKey: async ƒ createAssetKey(),
getLocales: async ƒ getLocales(),
parseEntries: ƒ parseEntries(),
sync: async ƒ sync()
```

You can use the `getEntries` function to get all the entries available:

```js
client.getEntries()
  .then(entries => {
    console.log(entries)
  })
```

Or query on metadata or content:

```js
client.getEntries({
  content_type: 'lesson',
  'fields.slug[in]': 'content-management'
}).then(entries => {
  console.log(entries)
})
```

Contentful in particular is interesting because one of the fields you can add to your entries is a reference to other entries. This gives the information architecture model some pretty amazing abilities to structure and enable pretty much any sort of content strategy you want to build — from simple key-value pairs for fetching strings to complicated, nested, conditional data structures.
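
To make that concrete, here's a hand-drawn sketch of the shape of a reference field (the field name and ID are invented for illustration, not a verbatim Contentful response). An entry points at another entry by ID, and the SDK can resolve the link into the full linked entry:

```json
{
  "fields": {
    "title": "A page",
    "relatedLesson": {
      "sys": {
        "type": "Link",
        "linkType": "Entry",
        "id": "5KsDBWseXY6QegucYAoacS"
      }
    }
  }
}
```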


And from the code's perspective, it's all just JSON!


For an example of how one can write components that consume this kind of general API data, I've put together a small sample of how to create a component that's defined by JSON structures, and how handling different configuration keys alongside content strings can create a powerful way to integrate with a headless CMS like Contentful. Check out the demo on Glitch.
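
As a minimal sketch of that idea (the entry shape, names, and class strings here are invented for illustration, and are not taken from the demo): a render function reads configuration keys to choose markup, and content keys for the strings themselves.

```javascript
// Hypothetical entry shape: `config` keys drive presentation,
// `content` keys carry the authored strings.
const renderCard = entry => {
  // A configuration key picks the markup, a content key fills it.
  const tag = entry.config.emphasis ? 'strong' : 'span'
  return [
    `<article class="card card--${entry.config.theme}">`,
    `  <h2>${entry.content.title}</h2>`,
    `  <${tag}>${entry.content.body}</${tag}>`,
    `</article>`
  ].join('\n')
}

const html = renderCard({
  config: { theme: 'dark', emphasis: true },
  content: { title: 'Hello', body: 'Rendered from JSON' }
})
console.log(html)
```

Swap the JSON that feeds `renderCard` and the component re-renders with no code changes, which is the whole point of the headless arrangement.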

---
title: Towards an Ethical Web Development
slug: an-ethical-web-development
date: 2020.09.01
description: Thinking about what it means for an industry to determine a moral standard of practice.
type: text
---

Since the beginning of this summer, with everything that’s descending upon us with Covid-19 and the Black Lives Matter uprising, it feels like we are living through a moment of moral accounting. In Portland, antifa is in the street in running battles with secret police, exploited workers are speaking up about the realities of our treasured restaurant industry, business owners are shutting down and pulling a disappearing act instead of facing accountability for their behavior. This has me thinking about my industry, which we all know has massive problems around racism, techno-fascism, and robber-baron level exploitation. We’re still dealing with people who read Snow Crash and Neuromancer and think those books were descriptions of Utopias.


There are a lot of people doing hard work right now to address these issues in the industry, working to identify how we can — as businesses — move toward a more just system of working, how we need to avoid baking our prejudices in to the AIs we build, and how the physical underpinning of the internet is built on an exploitative and extractive logic of global capital. This is all good and necessary work. It makes me think, though, about whether there are distinctions between the craft and practice of web development and the business logic and drive of the industry. The industry vs the practice - as in the technical skill of painting vs the economic system of patronage. Clearly they are related, and clearly our conception of painting has always been tied to the economic structures that make it a reality as a profession, but is there a way to think about an Ethical Web Development? It would be tied to running an ethical business, and necessarily need to be supported by an ethical economic system, but how could we articulate what it would look like to perform the craft and practice of web development ethically? What would an antiracist and antifascist web development practice look like?

Recently we were visiting my parents in Corvallis, and as it happens my mom's partner is Michal Nelson - a moral philosopher who specializes in ecological and environmental ethics. So I asked him if there were any frameworks for determining whether a given industry (his discipline focuses on forestry and resource management, for example) was acting "ethically". How does an industry set its own standard of ethical behavior? He explained that acting "ethically" is essentially just staying internally consistent to a set of values that you've articulated — and in this way being "ethical" or "sustainable" doesn't inherently hold any value. Many people or entities agree on the importance of acting ethically, but their baselines for what the core concepts actually mean, what they are talking about when they say "harm" for example, can vary wildly.

An ethical framework is a tool to reduce the possibility space - to help determine which of the available choices should be taken, rather than which could be taken. It creates a heuristic for determining which actions will cultivate an environment that supports a set of desired values. Determining ethical considerations becomes a design problem — what values do we want to see in the world? What outcomes do we want, and why? Who are we considering, and who are we not? What are the edges and the limitations beyond which we decide not to concern ourselves?

There are many ethics available to work from, and conflating a single ethic with the entire range of possible ethics can be a problem. If we assume that a utilitarian, individualist ethic is the only possible ethic we can work with (cough cough economics cough cough) it will necessarily preclude a whole range of outcomes that just aren't possible to arrive at from those starting conditions. Another ethic based on compassion, animism, and collectivism would start with wildly different axioms and naturally arrive at different outcomes. This debate and tension is at play today among the people who study and try to outline what a sustainable future of forest management is - it's an ongoing conversation (argument?) between people who don't share the same basic axioms around what they are trying to talk about.

Perhaps the core of any ethic is "harm reduction". We can see this in the Hippocratic oath and the medical ethic — first, do no harm. The issue is that "harm" is not a natural, discoverable property of the natural world. Harm is necessarily framed as a value judgment, the corollary to the idea of "good in the expanded field". We can outline the structural requirements and definitions of harm, and use those value judgments to identify harm in the world, but it's still a human judgment system being used to sort and order the world. To illustrate this in the medical world, we can look at the contemporary prerogative of "informed consent". We now define a lack of agency over medical decisions as a harm done to an individual. This has not always been the case.

This starts to get at the ecological connections between the craft and the business of web development - we can easily frame accessibility as a fundamental requirement of an ethical web development practice, in that it reduces the harm of excluding individuals from our work. But we can arrive at that principle from the logic of many different ethics, some of which could be wildly contradictory. From a utilitarian, efficiency- and profit-maximizing ethic, creating accessible web apps is ethical because not doing so would be to leave money on the table. From a humanistic and compassionate ethic, it's ethical because it fosters inclusion and equity. So we see accessibility as a clearly defined ethical practice, but that doesn't mean that we all agree on why accessibility is ethical.

I think in order for us to define what we mean when we try and define how our industry can be ethical, we need to work through a few steps:

1. What values do we want to encourage and foster in the environment?
2. Where do we draw the distinction between things we are concerned with and things we are not?
3. How do we determine who is affected by our ethic and who is not?
4. How do we then define the rules for determining which actions we should take and which actions we should not take?

If, as an industry and as individuals, we can have these conversations, then we can start to come to terms with what it means to work every day in a world where we are actively supporting and enriching the world's first trillionaire. Is an ethical web development one where we must boycott AWS? How do we feel about data centers in general? What about ISPs, what about underwater cables?

Jaya Saxena's recent piece in Eater clearly identifies the problems associated with acting ethically as an individual — or even an industry — while remaining a part of a society that systemically undermines that ethic through structural design.

> Building an equitable restaurant, a place where all workers are paid fairly, have benefits, and can work in an anti-discriminatory environment, is going to take a near-undoing of the way most restaurants are run.

Saxena examines some current models for employee-owned and cooperative businesses, and privately owned businesses that actively choose equality and community over profit. She identifies the compromises in these enterprises, and sums up the systemic issues at hand by concluding that "… when it comes to restaurants, it's hard to change one thing unless you're changing everything."

There are systemic forces at work that prevent any individual, or even any small community, from truly reaching a place of ethical behavior. This makes me think that there has to be a split between "acting ethically" and "being ethical". We can all act ethically, working our way upstream against the systemic forces arrayed against us, but that's no guarantee that we will, at the end of the day, be ethical.

The essay identifies one restaurant and farm that solves its ethical crisis by charging $195 per person per meal, and frames that as a choice that consumers get to make. This is striking. We live in a time of unprecedented efficiency, unbelievable abundance, and massive wealth, but if a restaurant is called to truly account for its exploitation and charge its true price, it's immediately untenable. This feels like it must be true across many industries – Uber would rather cease service in California than treat its drivers like employees. What would it take to truly understand the network of costs, values, and debts, and the real price of things?

Would an ethical web development be able to account for that cost and still be able to be a business in our society? During my early courses in fine art printmaking at university, I was taught that a blank sheet of paper had value on its own. Not only the price attached to it (steep, for nice paper) but also the work and craft that went in to making it. One had to be sure that the image we were impressing on the blank sheet of paper added value to it rather than reduced it. Our work had to be more valuable than the paper, and if it wasn't we didn't make it.

Ingrid Burrington has written extensively about the physical realities of the internet, and what it means to turn the raw stuff of the earth into the objects we need to make computers. She's even turned computers back into raw stuff. It's hard to confront the reality of an open pit lithium mine and conclude that needs must for better batteries.


Can web development be ethical? Maybe not. But that doesn't mean that we don't have an obligation to act ethically. If we can articulate the ethic we want to have in our industry, and stay internally consistent to those principles in an effort to manifest values we want in the world, maybe that's enough. Or a start anyway.

---
title: Fun With JSON-LD
slug: fun-with-json-ld
date: 2020.08.25
description: Learning what JSON-LD is all about, and why we should use it.
type: text
---

Working with Adam Riemer on SmugMug's SEO has been a really illuminating experience. SEO consulting has always been flagged in my mind as "Snake Oil Business", but Adam really is the best in the field. Almost all of his SEO suggestions focus on performance and accessibility, and he has some clear, hard metrics to define "good". This squares with my fundamental understanding of good SEO practices, and has broadened my horizons and understanding of the practice.

Something that Adam introduced me to is JSON-LD – a way of creating structured metadata for pages that's more explicit than microdata formats. Here's what I've learned about JSON-LD so far.

JSON-LD is Google's preferred format for accurately and succinctly structuring metadata for pages. This gives them insight into what's on your page and why, and they use The Algorithm to interact with and consume this data. Using their standards gives you the opportunity to get top, fancy search results, but there's no guarantee of that. The best thing to do is to use your structured data to give the best, most accurate, and complete picture of what content your page has for your audience. Trying to game SEO here is probably going to backfire; just describe things as they are, as clearly as possible.

The primary purpose of structured data is to create machine-readable and algorithm-friendly metadata for your content. This allows the content to be consumed by the crawlers and the robots, and join in the mesh of content that Google exposes to users when they perform searches or ask questions of it.

Clearly this is a double-edged proposition. By using structured data you're explicitly buying in to the ecosystem that Google is creating, and allowing your content to be trawled and used and understood however they want. You undoubtedly end up providing value to Google in excess of what they are providing to you. Not to mention participating in the project of making the world machine-readable, which has its own philosophical freight.

Schema.org has a lot of data types that might be appropriate for your project: Articles, Books, Breadcrumbs, Carousel, Course, Critic Review, Dataset, Event, How-to, Local Business, Movie, Podcast, Product, Software App, and Video are all ones that look interesting to me.


For something like this site, we're using pretty much entirely Website and Article – and connect them with a CollectionPage and a Person who is me! Maybe some of the art will be a CreativeWork.


Schema.org has more information on each of these types.

Let's work through Google's example of an article, maybe for this article!

Here's the script tag that is home to our structured data:

```html
<script type="application/ld+json">
…
</script>
```

We fill it with a JSON object that describes our data structure:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Article headline",
  "datePublished": "2020-08-25T16:42:53.786Z",
  "dateModified": "2020-08-25T16:42:53.786Z"
}
```

The `@context` key clues the robot in to the data definition we're going to be using, which is the schema.org definitions. The `@type` tag associates the following data with the pre-defined structure. From there on it's relevant data! `headline`, `datePublished` and `dateModified` are all directly pulled from the content itself. In our case the data looks like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Fun With JSON-LD",
  "datePublished": "2020-08-12T08:00:00+08:00",
  "dateModified": "2020-08-12T08:00:00+08:00"
}
```

Open question: BlogPosting or Article? I'm going to stick with BlogPosting, since these texts are really just that. I would use Article if I was writing a news piece or a review, or something maybe more scholarly.
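
That choice only changes the `@type` value in the data above (a sketch; BlogPosting is a schema.org type in its own right, with the same basic fields):

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Fun With JSON-LD",
  "datePublished": "2020-08-12T08:00:00+08:00",
  "dateModified": "2020-08-12T08:00:00+08:00"
}
```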


The last required field is an image:

> For best results, provide multiple high-resolution images (minimum of 300,000 pixels when multiplying width and height) with the following aspect ratios: 16x9, 4x3, and 1x1.

```json
{
  …
  "image": [
    "https://example.com/photos/1x1/photo.jpg",
    "https://example.com/photos/4x3/photo.jpg",
    "https://example.com/photos/16x9/photo.jpg"
  ]
}
```

This means that creating thumbnails for every Article is important, and those images need to exist on the page in a way that users can see.

For this site, the main use of these images is going to be for sharing thumbnails. The fact that the image needs to be on the page is interesting, since that really influences the design of the page. I've found that creating the necessity for a prominent thumbnail or hero image that accompanies each article is a recipe for a) not writing articles and b) bland stock photography. I want to avoid both. That means for this site I'm going to do illustrated images, small sketches and motif explorations that may or may not illustrate the article, and attach them to the bottom of the article.

There are two other sections I want to look at, even though they are not requirements according to Google. These are the author and the publisher fields. The goal of using these fields is to create an association between you and your work; or in the case of the publisher field between an imprint entity and the creative works they've published. In our use case for this site, my goal is to create a machine-readable entity that is 'Nikolas Wise' and attach my articles and my work to that, in order to create a coherent entity that is exposed to the broader web.


The author field is a Person or an Organization; the publisher field is an Organization. Let's start with Person:

> A person (alive, dead, undead, or fictional).
> https://schema.org/Person

It gets added to our JSON-LD like this:

```json
{
  …
  "author": {
    "@type": "Person",
    …
  }
}
```

There are a lot of properties in this schema, like `deathPlace` and `knows`. One could really get into this and make it a very robust and complete data object, but I'm not sure how much value that would bring at the end of the day. There's a fine line between following specs and best practices to achieve a goal, and ticking boxes to structure our lives solely in order to make them legible to the algorithm. I guess we each decide where that line is for ourselves.

For me, I'm going to stick with `name`, `url`, `image`, `jobTitle`, `knowsLanguage`, and `sameAs`. Although `publishingPrinciples` seems interesting, and I might write one of those.

Most of the fields are simple text strings, and can get filled out like so:

```json
{
  …
  "author": {
    "@type": "Person",
    "name": "Nikolas Wise",
    "url": "https://nikolas.ws",
    "image": "https://photos.smugmug.com/Portraits/i-ThnJCF5/0/f9013fdc/X4/wise-X4.jpg",
    "jobTitle": "Web Developer",
    "knowsLanguage": "en, fr",
    "sameAs": …
  }
}
```

The language codes are from the language code spec, and could also be Language schema objects. The job title could be a DefinedTerm schema object.
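
For instance, a sketch of the Language-object form might look like this (the `name`/`alternateName` values here are illustrative; the plain string form above works fine too):

```json
{
  "knowsLanguage": [
    { "@type": "Language", "name": "English", "alternateName": "en" },
    { "@type": "Language", "name": "French", "alternateName": "fr" }
  ]
}
```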


The `sameAs` key is an interesting one: it's either a URL or an array of URLs that connect this @person with other parts of the web that are also that @person.

```json
{
  …
  "author": {
    …
    "sameAs": [
      "https://twitter.com/nikolaswise",
      "https://github.com/nikolaswise",
      "https://www.instagram.com/nikolaswise/",
      "https://www.linkedin.com/in/nikolas-wise-6b170265/"
    ]
  }
}
```

This will connect "me" with this site and my twitter, github, instagram, and linkedin profiles. Those are the pages that I want the algorithm to associate with "me".

@organization is similar to @person in a lot of ways, and the fundamental idea is the same: the goal is to create a single entity that the algorithm can connect disparate pages and items to. I'm not going to set up an @organization here, but the Organization schema type has the spec for the object.

So that's it! That means the entire JSON-LD for this article – and therefore the rest of the texts as well – looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Article headline",
  "datePublished": "2020-08-25T16:42:53.786Z",
  "dateModified": "2020-08-25T16:42:53.786Z",
  "image": [
    "https://example.com/photos/1x1/photo.jpg",
    "https://example.com/photos/4x3/photo.jpg",
    "https://example.com/photos/16x9/photo.jpg"
  ],
  "author": {
    "@type": "Person",
    "name": "Nikolas Wise",
    "url": "https://nikolas.ws",
    "image": "https://photos.smugmug.com/Portraits/i-ThnJCF5/0/f9013fdc/X4/wise-X4.jpg",
    "jobTitle": "Web Developer",
    "knowsLanguage": "en, fr",
    "sameAs": [
      "https://twitter.com/nikolaswise",
      "https://github.com/nikolaswise",
      "https://www.instagram.com/nikolaswise/",
      "https://www.linkedin.com/in/nikolas-wise-6b170265/"
    ]
  }
}
</script>
```


---
title: Pressing Words, With Your Friend, Wordpress
slug: wordpress-but-not-terrible
date: 2018.10.24
description: A contemporary developer's guide to building things on Wordpress 4.x and not having it be terrible.
type: text
---

TL;DR: Start here. Install this thing and connect it to your account on here. Buy a license of this (it's worth it). Read some docs for this and start building. Wordpress 5 and Gutenberg will probably break all of this except the environments.

When I first started working as a developer, Wordpress was the prevalent platform for pretty much any project. Ten years later and … Wordpress is still pretty much most of the internet. In general, Wordpress will be my last choice of a platform. I prefer to build static sites, use a headless CMS, or almost anything else at all.


That said, as the Technical Director at Fuzzco — a design studio that relies almost exclusively on Wordpress for their websites — Wordpress was happening. Fuzzco is rare among studios in that we manage and host projects for our clients, and often have maintenance riders that can last for years. This means that in the course of a year, not only did we build a half dozen new projects on Wordpress, but we maintained and triaged issues on over 100 legacy projects.


Very quickly I realized I had one option: make Wordpress not terrible.


## Terrible is pretty harsh

If you're comfortable with Wordpress, you might find some fightin' words here. What's my problem with Wordpress and what am I trying to solve for? My biggest issue with Wordpress development as I've encountered it in the past is a lack of clarity around the requirements of the entire system. What does the project need to run in an environment, and why? How do we move from a repository to a local environment and start working on a codebase? How does that codebase get deployed to a server?


I've seen Wordpress systems that are frozen in time in 2006 — FTP to the server and edit a CSS file on production, or "deploy" your theme by uploading a .zip. I'm interested in how we can lower the cognitive overhead for getting a Wordpress project up and running, and join in with pre-processing, compiling, containerizing, testing, and all the really excellent things that we've come to expect from our web stacks over the past few years.


Another issue I have with Wordpress is its commitment to auto-magical routes and rendering templates with obscure and complicated .php patterns that basically concatenate strings. I'm interested in explicit routes — either hard-coded or parameterized — and separating concerns between logic and template.


A lot of this boils down to a disagreement between what Wordpress thinks a site should be and what I end up using it for. Wordpress as designed distinguishes between your "site" and your "theme". Your "site" is the content in the database, the options you've saved, and the menus and widgets you've installed. It expects "themes" to be presentations of this real website stuff. This model of websites perpetuates the idea that "design" is something that can be applied over a website, a kind of dressing up of the real thing. This is the inverse of, and perhaps a corollary to, the concept that designing a website is just deciding what it looks like. It's an idea that lives within the system of silos between design and development, the idea that we can "design" a website in Photoshop or Sketch and hand off the comps to a developer to build. Which is how a lot of Wordpress projects are built.

In short, I disagree with this concept of websites. My position is that designing a website is how it looks, how it works, and how the data and structures are composed. Taking this approach, controlling the object models, the information architectures, and the templates are all of equal importance. In my line of work, a Wordpress theme cannot be applied to any site other than the one it was designed for, a site where the structure was designed for the theme.

## So why use Wordpress?

There are still a number of really good, compelling reasons to use Wordpress as a platform for building websites. It's got a robust built-in commenting system with user accounts. It's really good for things that are shaped like blogs. It's got a huge, well-maintained ecosystem of plugins. It's free. And since it's most of the Internet, clients are really, really comfortable with it.


There are a couple of reasons not to use Wordpress right now. Mostly these center around the impending release of Wordpress 5.0 and the Gutenberg editor, which has a number of concerns around plugin compatibility and accessibility for authors.


But that's okay, since we've decided to use Wordpress 4.x. As we all know, picking a version of Wordpress and then never upgrading it is one of the time honored traditions of Wordpress development.


## How does this work even

Let's start at the end.


We're going to be hosting our production Wordpress site on a Digital Ocean droplet — the smallest one they have — for $5 per month. Depending on the project lifecycle, we can set up more droplets for a staging server and a development server. At Fuzzco we used dev servers to show sites to the internal team, staging servers to show sites to the client, and production servers to show sites to the public.


I don't know about you, but I personally don't super love managing my virtual private servers manually. In order to deploy our codebases to Digital Ocean we'll use the phenomenal tool Nanobox. Nanobox is an operations layer that handles containerizing applications and deploying them agnostically to a cloud service provider. Nanobox will deploy our code from the command line to any one of our droplets.


Nanobox will also containerize and run an application in a virtual machine locally. This means we'll use it to run our development environment, and ensure that all of our environments are identical. No more worrying about PHP versions and extensions and plugins. No more running MAMP or MySQL or Apache or whatever on your local machine before anything works. Nanobox defines the server in a .yaml file, and it will always be the same. It also handles all the syncing between our local disk and our virtual environment.


So now that we know how our code is going from local to production, we can think for a second about how it's going to do that, and how we're going to manage our data.


The database on the production server is "canonical". That means that the database the client interacts with is the one true database, and we must treat it with care and attention. We'll never change that database ourselves, and we'll move that database downstream from production to staging to dev to local in order to develop against our real data. Importantly, we don't want to migrate the database manually either. It's a little expensive, but Migrate DB Pro is an incredible resource for this part. I guess one could also look into alternatives for personal projects.

The canonical codebase lives in version control, and moves the other direction. From Github to local to dev to staging to production, amen. The only things we need to track in version control are what makes our project unique. Practically, this means we need to track our theme and our plugins. Wordpress core files are not special, and we should not fill our repositories with them.


## Getting started

At this point it's worth getting started with Nanobox. I back the containers with VirtualBox, since at the time I started this it was slightly more stable than Docker on MacOS High Sierra. Once Nanobox and VirtualBox/Docker are installed, set up Digital Ocean as your provider. Once that's done, we have everything we need to get started!

I'll be talking through a project I built in order to facilitate building other projects. This will be more intense than you might need for a single build, but this was designed as a tool that anyone can use to get started quickly. Here's the basic structure of our repo:

```
📁 /project-name
⮑ 📄 .gitignore    # includes /wp
⮑ 📄 package.json  # tooling lives here
⮑ 📄 readme.md     # be nice, write docs
⮑ 📁 theme         # our theme codebase
⮑ 📁 plugins       # vendor plugins
⮑ 📁 scripts       # some helpers
```

The crux of the project is our boxfile.yml configuration file. This is what Nanobox uses to create our containers. It looks like this!

```yaml
# /boxfile.yml
run.config:
  engine: php
  engine.config:
    runtime: php-7.0           # Defines PHP version
    document_root: 'wp/'       # Dir to serve app from
    extensions:                # PHP extensions we need
      - gd
      - mysqli
      - curl
      - zlib
      - ctype

web.wp:
  start: php-server
  network_dirs:
    data.storage:
      - wp/wp-content/uploads/

data.db:
  image: nanobox/mysql:5.6     # Nanobox DB magic

data.storage:
  image: nanobox/unfs:0.9
```

As noted above, we'll be serving our entire installation of Wordpress from the /wp directory. This will hold all the Wordpress core files and compiled theme code, none of which we need or want in version control. As such, make sure this directory is listed alongside node_modules in the .gitignore.
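
Concretely, the relevant ignore rules might look something like this (a sketch; adjust for your own tooling):

```
# Wordpress core files and compiled theme code land here; never tracked
/wp

# Standard tooling noise
node_modules
```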


Since we've decided that we don't want to track these files, but we need them to actually have a project, we can write a helper script to take care of the gap between those two ideas.


Here are the scripts we're going to write to help us handle this process:

```
📁 /project-name
⮑ 📁 scripts
   ⮑ 📄 check-install.sh # Installs Wordpress core files.
   ⮑ 📄 init.sh          # Runs our setup helper.
   ⮑ 📄 prestart.sh      # Checks if we need to init.
   ⮑ 📄 setup.js         # Cute lil' CLI helper.
```

The first thing we'll do is write a script that checks if /wp exists. If it doesn't, throw an error that we need to initialize the project since we don't have any of the core files we need.

```bash
#!/bin/bash
# prestart.sh
echo 'Check to make sure wordpress is here at all'
if test -d ./wp/
then
  echo 'yup we good'
  exit 0
else
  echo 'Project not initialized: Run `$ npm run init`'
  exit 1
fi
```

I'm calling this `prestart` because I want to run it before `npm start`. Many times I'll be on autopilot, and after cloning a repo simply run `npm install` and `npm start`. This interrupts that process and lets me know I need a third step, `npm run init`. Let's put this in our package.json scripts:

```
# package.json
{
  ...
  "scripts": {
    ...
    "init": "./scripts/init.sh",
    "prestart": "./scripts/prestart.sh",
    "start": "npm run dev"
  }
  ...
}
```

We'll get to our dev tooling later. Let's take a look at what our init.sh script does:

```bash
#!/bin/bash
# init.sh
node ./scripts/setup.js
```

Not much! This just runs our setup CLI helper. You might not need all this, but since I built this system to help a team of developers work on many many projects you're gonna get it anyway.

```js
// setup.js

// some nice deps for making a CLI.
const prompt = require('prompt')
const exec = require('child_process').exec
const colors = require('colors/safe')

// Run and log a bash command
const bash = cmd => {
  msg('green', `Running: ${cmd}`)
  return new Promise(function (resolve, reject) {
    exec(cmd, (err, stdout, stderr) => {
      if (err) reject(err)
      resolve(stdout)
    })
  })
}

// Log a message
const msg = (color, text) => {
  console.log(colors[color](text))
}

// do the magic
const setup = (err, result) => {
  if (err) msg('red', err)

  msg('yellow', 'WordPress configuration values ☟')

  for (let key in result) {
    msg('yellow', `${key}: ${result[key]};`)
  }
  // run our check-install script.
  bash(`${process.cwd()}/scripts/check-install.sh`)
  // add our project to the hostfile
  .then(ok => bash(`nanobox dns add local ${result.name}.local`))
  // explain the next step
  .then(ok => {
    msg('green', `Run npm start, then finish setting up WordPress at ${result.name}.local/wp-admin`)
  })
}

msg('green', 'Making Progress!')
prompt.start()
prompt.get({
  properties: {
    name: {
      description: colors.magenta('Project name:')
    }
  }
}, setup)
```

This will open a CLI asking for the name of the project, run the check-install.sh script, create the hostfile line for our local DNS at <project-name>.local, and log the next action that you need to take to finish installing Wordpress.


Let's take a peek at our check-install.sh file:

```bash
#!/bin/bash
# check-install.sh
echo 'Check to make sure wordpress is here at all'
if test -d ./wp/
then
  echo 'yup we good'
else
  echo 'nope we need that'
  degit git@github.com:nanobox-quickstarts/nanobox-wordpress.git wp
fi
rsync -va --delete ./plugins/ ./wp/wp-content/plugins/
rsync -va --delete ./theme/ ./wp/wp-content/themes/my-theme
```

Very similar to prestart! The biggest difference is the bit where we use degit to clone Nanobox's official Wordpress repo into our untracked /wp directory. Degit will only get the head files, and none of the git history. Nor will it keep the .git directory, basically making this a super clean, super fast way to download a directory of files. It's great. The last thing this does is wipe out any themes or plugins that we don't want or need in the core files, and syncs our own tracked directories to the correct places in the Wordpress core file structure.

Now would be a good time to talk about plugins.

## What's up with plugins?

Wordpress has a million plugins. We're going to focus on some of the basic ones that almost every Wordpress project ever needs, and should honestly be part of Wordpress. Building sites without these is a pain. Here they are:

\n
📁 /project-name\n⮑ 📁 plugins\n  ⮑ 📁 advanced-custom-fields-pro\n  ⮑ 📁 custom-post-types-ui\n  ⮑ 📁 timber-library\n  ⮑ 📁 wp-migrate-db-pro\n
\n

There are a couple more in my repo to do things like order posts in the CMS and import CSVs. Not super necessary, so we won't talk about them here.

### Advanced Custom Fields

ACF is a staple of Wordpress development. It lets us define new key/value pairs to extend the data model of things like posts and pages, and allows us to create a set of global variables available from anywhere. Sounds simple; surprising that it's not part of Wordpress.

### Custom Post Types UI

CPT-UI creates an interface in the admin panel for creating new post types. Out of the box, Wordpress comes with Posts and Pages. CPT-UI lets us build new types like Projects or Case Studies or whatever we need for our data model. Again, surprising that this isn't just part of Wordpress. C'est la vie.

### WP Migrate DB

Migrate DB lets us ... migrate ... our ... DB. This gives us the ability to sync our databases across environments and get media uploads and things without needing to write magic MySQL queries while tunneled into open database ports on virtual machines. This is better. Believe me.


### Timber

The Timber library from Upstatement is the greatest thing to happen to Wordpress development, after those plugins that should just be part of Wordpress. Timber introduces the concept of layout templates to Wordpress. This lets us write PHP to manipulate data, and pass that data to a template file where we can write Twig templates rather than composing strings in PHP. Basically ...

```php
<?php echo $myvar ?>
```

Turns into:

{% raw %}

```twig
{{ myvar }}
```

{% endraw %}


This lets us write templates with a templating language, and write server-side business logic in a server-side programming language. Truly revolutionary.

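
To make this concrete, here's a small, hypothetical Twig partial of the kind Timber renders. The file name and fields are illustrative, not code from the project repo; the `page` variable is whatever the route's PHP file put into the Timber context.

{% raw %}

```twig
{# views/partials/hero.twig (hypothetical example) #}
<header class="hero">
  <h1>{{ page.title }}</h1>
  <div>{{ page.content }}</div>
</header>
```

{% endraw %}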

## What we talk about when we talk about Wordpress development: or, The Theme

With all this initial work around Wordpress core, development environments, and a basic plugin ecosystem in place we can start talking about the good stuff: the theme!

```
📁 /project-name
⮑ 📁 theme
   ⮑ 📁 es6              # Source JS
   ⮑ 📁 scss             # Source SCSS
   ⮑ 📁 routes           # PHP route logic files
      ⮑ 📄 index.php
      ⮑ 📄 page.php
      ⮑ 📄 post.php
   ⮑ 📁 views            # Twig templates
      ⮑ 📁 layouts
      ⮑ 📁 pages
      ⮑ 📁 partials
   ⮑ 📄 functions.php    # This includes routing.
   ⮑ 📄 screenshot.png   # Theme preview image.
   ⮑ 📄 index.php        # Need this, but it's empty. ¯\_(ツ)_/¯
```

We won't get too deep into this, since we're getting into more conventional territory here. Basically our es6 directory holds source JS that will get compiled into a bundle. Same with the scss directory, which gets compiled into css. We handle that with npm scripts in the package.json.

```
# package.json
{
  ...
  "scripts": {
    ...
    "css": "node-sass ./theme/scss/style.scss theme/style.css --watch",
    "js": "rollup -c -w",
    ...
  }
  ...
}
```

Hopefully none of this is too unusual — if it is, I recommend reading Paul Pederson's excellent article on npm scripts.

There is one part of this I want to touch on before moving on:

```
# package.json
{
  ...
  "scripts": {
    ...
    "sync:plugins": "rsync -va --delete ./plugins/ ./wp/wp-content/plugins/",
    "sync:theme": "rsync -va --delete ./theme/ ./wp/wp-content/themes/fuzzco",
    "watch": "rerun-script",
    ...
  },
  "watches": {
    "sync:plugins": "plugins/**/*.*",
    "sync:theme": "theme/**/*.*"
  },
  ...
}
```

This bit sets up a watcher on our theme and plugins directories, which syncs our tracked working files to the correct place in our Wordpress core file structure.

## Functions, Routes, and Views

The last thing I want to touch on is the basic structure of using Timber to match routes with views.

```php
/** functions.php */
Routes::map('/', function($params){
  Routes::load('routes/page.php', $params, null, 200);
});
Routes::map('/:page', function ($params) {
  $page = get_page_by_path($params['page']);
  if ($page) {
      Routes::load('routes/page.php', $params, null, 200);
  } else {
      Routes::load('routes/404.php', $params, null, 404);
  }
});
Routes::map('/blog/:post', function($params){
  Routes::load('routes/post.php', $params, null, 200);
});
```

These are Timber routes defined in the functions.php file. This replaces the standard routing of Wordpress, and we have to change the structure of the Wordpress permalinks to anything other than the default to have it work. This is documented in Timber.

When our server gets a request at a route of /page-name, it will call the page.php file and pass it the params associated with the route.

```php
<?php
  /** page.php */
  $context = Timber::get_context();
  $post = new TimberPost();
  $context['page'] = $post;

  Timber::render( array(
    'views/pages/page-' . $post->post_name . '.twig',
    'views/pages/page.twig'
  ), $context );
?>
```

The page.php file assigns some variables, interacts with Wordpress to get and shape our data, and then renders the twig file associated with the page. In this case, it's going to see if there's a template that matches the name of our page, otherwise it will render the default page template.


## Back to the beginning

You've built your theme! Maybe it's a simple hello world, maybe it's a heavy duty big ol' thing. Either way, it's time to deploy.


You can use Nanobox to create a droplet for your server. Nanobox will give your project a name in their system, and expose the URL for the server at <your-project>.nanoapp.io. I like to use the convention project-dev, project-stage, and project-prod. Once you create your project in Nanobox, the hard part is over and you can let them do the heavy lifting:

```shell
$ nanobox deploy project-dev
```

Or we can map this to our NPM script:

```shell
$ npm run deploy:dev
```

This will containerize our application, push it to our droplet, hydrate the entire thing, and serve! Now we can use Migrate DB to move our database around, and we're in business.

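
To wire that up, the deploy commands can live in the scripts block. This is a sketch assuming the three environment names above; the script names themselves are just a convention, not anything Nanobox prescribes:

```
# package.json
{
  ...
  "scripts": {
    ...
    "deploy:dev": "nanobox deploy project-dev",
    "deploy:stage": "nanobox deploy project-stage",
    "deploy:prod": "nanobox deploy project-prod"
  }
  ...
}
```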

## Putting it all together

The project repo is a turnkey, ready to roll version of all the above. It contains all the tooling needed to get started, and if you've followed along with this guide, you should be able to get started in no time.


As always, feel free to reach out to me in your venue of choice to talk about any of this — I would be happy to help you set this up for your own Wordpress project!

---
title: Soft Proof
slug: soft-proof
date: 2017.10.29
description: Translations and compromises in image making or; the Image Cult Society.
type: text
---

There's an interesting thing that happens when a new idea or technology gets introduced then quickly assimilated into the background hum of our daily lives. It starts out with a discrete name — a clear identifier of what this thing is and means. Then this name just sort of ... slips away. It becomes so normal that to name it would seem strange. Its original name doesn't seem to fit any more, as the name existed in the first place to demarcate the new thought from the ordinary. And now the new thing is just ordinary. Think about Google Maps. It's just ... a map. In 2005, when Google Maps was first released, its particular approach to the interface of a digital map was called a 'slippy map'. Weird, right?

This is an interesting phenomenon around cultural approaches to technology, but not actually what I want to talk about. I want to talk about soft proofs. Soft proofs are an example of this taken to an extreme — you use them every day but you have probably never heard of them. There is no need for the soft proof to be something other than normal, the soft proof just is normal. But what is a soft proof, and why is it so normal? And why do I want to explore a topic so quotidian that the word used to mark it as interesting is so faded and worn?


A soft proof is a way of viewing an image before the image has been reproduced mechanically. In contrast to the soft proof is the hard proof: a way of viewing an image immediately after it's been reproduced mechanically. Basically, a soft proof is an image on a screen that will be sent to a printer. Otherwise known as an image. Its need for a discrete name seems so unnecessary that it feels bizarre to refer to all images – even this text as I write it – as soft proofs. But that is, in essence, what they are. We see images on our screens that can be reliably turned into images on other people's screens, and even into physical images on paper.

The reason why this needed a name to demarcate it as special — during the advent of the digital — is that this is a really hard problem to solve. There are a range of mathematical models for approaching a relatively unified theory of color and vision, and a wide range of physical pieces of machinery that are tasked with producing those images — from printing presses to monitors. The act of ensuring an image can be predictably reproduced is necessarily an act of translation. Translating from this color space to that; from an additive color model of a screen to the subtractive color model of ink and paper; approximating the color of a paper stock to be printed on.


This translating process is done using something called a Color Profile. A Color Profile is a set of rules for ensuring that an image created with red, green, and blue light can be replicated on off-white paper using cyan, magenta, yellow, orange, and green inks. The current workflow of digital to print is so smooth, so ubiquitous and mundane, as to occlude the massive technological feat that supports it.
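
To get a feel for the kind of translation a profile performs, here's the textbook naive RGB-to-CMYK formula as a sketch. Real ICC color management is far more sophisticated (gamut mapping, ink behavior, paper stock), so this shows the shape of the problem, not the actual math inside a profile.

```javascript
// Naive RGB → CMYK conversion. Channels are in the range 0–1.
// Real color management goes through an ICC profile and a
// device-independent connection space; this is just the toy formula.
function rgbToCmyk (r, g, b) {
  var k = 1 - Math.max(r, g, b)
  if (k === 1) {
    // Pure black: no colored ink needed.
    return { c: 0, m: 0, y: 0, k: 1 }
  }
  return {
    c: (1 - r - k) / (1 - k),
    m: (1 - g - k) / (1 - k),
    y: (1 - b - k) / (1 - k),
    k: k
  }
}

// Pure red comes out as full magenta and yellow, no cyan, no black.
console.log(rgbToCmyk(1, 0, 0)) // → { c: 0, m: 1, y: 1, k: 0 }
```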

This feat was undertaken by a small group of technology companies in the early 90s, and they collaborated to define a universal standard of how this would work.

> The International Color Consortium was formed in 1993 by eight industry vendors in order to create an open, vendor-neutral color management system which would function transparently across all operating systems and software packages. . . . The eight founding members of the ICC were Adobe, Agfa, Apple, Kodak, Microsoft, Silicon Graphics, Sun Microsystems, and Taligent.
>
> — Color Science History and the ICC Profile Specifications, Elle Stone

The current, baseline profiles built around RGB and CMYK came about with the rise of digital image-making, which is the basis of the current world around us, a world built on and predicated by images.

The dominant translation is dominant because it — to a large degree — works. Creating color profiles is really hard, mathy, physicsy stuff. It's hard to do yourself. But CMYK/RGB cuts the corners off the world to make it fit into a gamut that can be handled. But by necessity it's a compromise: what colors are we not handling in order to handle the maximum number of colors? What parts of the color space get left behind?

> Many of these issues give me the feeling at times of reluctant rather than open co-operation between some of the companies that created this standard. Having said that, there does seem enough information in the public standard (when combined with examining available existing profiles) to effectively and accurately characterize color profiles of devices and color spaces. I could imagine there being some poor results at times though, due to some looseness in the spec.
>
> — What's wrong with the ICC profile format anyway?, Graeme Gill

It's important to understand this compromise; understand how it works and what exchange we're making in the process. What are we giving up, and what are we getting in return?

What are we leaving on the table? For example, photography (up until the 80's) was calibrated for white people. African Americans and other dark-skinned people photographed poorly. They were outside the color space. The story goes that school photos of interracial classrooms would have rows of perfectly exposed white kids, and voids where the black children should have been (Adam Broomberg). More than this, the standards only changed with industry pressure from chocolate and furniture manufacturers — a realm of capital where the browns and blacks matter (Rosie Cima).
> Film chemistry, photo lab procedures, video screen colour balancing practices, and digital cameras in general were originally developed with a global assumption of ‘Whiteness.’
>
> — Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity, Lorna Roth

With the creation of the RGB color space, with the creation of the ICC, we ceded the visual world to supermassive tech interests, much like we've ceded our privacy and personal data. In doing so we've inherently made the creation and dissemination of images into a tool for capital — one that supports dominant power structures.


How do we understand the implicit, invisible, baked-in assumptions of the soft proof? We can start by operating outside the parameters of the soft proof, recognizing it as a tool to use or not use. The gap between the soft proof & the hard copy is measured in the gap between the tools used to plan & prepare versus the tools used to produce, and we can move into this gap and inhabit it. We can create work here, and in doing so reclaim some of the space that we've given away.

The Risograph, for example, has a toolchain for soft proofing, but the machine — through its high speed & low cost — also opens up the possibility of designing images through iterative hard proofs; blending the techniques of the modern digital print process with the classical analog ones.

The web is a strange medium – a blending of soft and hard spaces. A plastique space, with plastic proofs and plastic copies. The same process of translation is at work — between the still & the interactive, flowing image. This is why showing comps & wireframes of websites to clients can be so tricky: our culture of image cult and technological process can elide the critical differences we sense as agents of these systems.


This is a call for a Marxism of image making — to seize the means of production. To create radical images & tools that exist in the corners of the gamut and color spaces discarded by the soft proof. To understand that planning is not doing, and take control of our own visual languages.

---
title: How to Design While Developing
slug: how-to-design-while-developing
date: 2016.5.15
description: Moving beyond the idea that the designer and the developer on a web project are different people, and that somehow those are different things.
type: text
---

For a long time, websites got made with one person designing how it should look and one person developing the code that made it look that way.


A lot of times, this is still how things get done - one team makes static Photoshop comps, and hands them off to a team of developers who know stuff like whether React or Ember or Node or Ruby is the best thing. This can sometimes cause friction. The designer expects the website to look exactly like the comp, the developer writes a bunch of custom CSS and HTML to fit the design, and whoever needs to make sure the whole thing is WCAG compliant spends weeks hating both of them. When the next comp comes down the line, it all happens again. For a big site, this leads to design drift, and a hugely tangled codebase that’s a nightmare to try and untangle.

This splitting of systems is an artificial one that’s sustained by organizational assumptions: we need designers, and we need developers. The thing about Design though - capital D design - is that it’s simply a method of deciding on a structure to accomplish a purpose. The design tools and methods one uses to accomplish good design are always in tune with the thing being made. A building doesn’t go from painting to construction drawings, nor does a car go from modeling clay to racetrack. The clay and the paint are very useful steps to start the process of design. They help us to be creative and loose and explore new solutions to the problems at hand. This exploratory work helps us to understand how a thing will feel in the world.

These initial drawings and sketches get translated into their final structures, and translation is a process that can enrich both what is being translated to as well as what is being translated from. Every precise moment may not align directly, since differences in context can have deep implications for meaning.


We turn our drawings into objects with all the considerations of the final materiality present. We can’t ignore the shape of the engine or structural code requirements, although our models and paintings certainly can. Design is provisional until the point at which it exists in the world, and when talking about the web even this isn’t any sort of ending. A website is its own sort of thing, with its own structure and requirements that need to be present and known throughout the entire process of design.


A drawing and a website will never look the same. This is mostly because a website isn't static. Because of this, what a website looks like is just a small piece of what a website is. A website is what it enables its users to accomplish, what its developers have to do to keep it stable and moving forward.


The best way to meet our goals as people who make good websites is to focus less on those drawings right away. Instead we should think more about simplicity and elegance than the detail oriented perfection of a jpeg.


Don’t get me wrong - the jpeg can be important. It shows us how we think we can solve our problems, a glimpse of how we want our website to look and feel, and the tone we want to communicate. Through all this it needs to have room to change and breathe as it comes to life and becomes a real thing. Our drawings should not be on the level of “what does this look like” but “what problem does this solve and how”. At every step in the process we can work to make the real thing better, to solve new questions that arise as we move through the process of design / development – a process where there is no gap between those two ideas.

To design a website is to develop it - and as we develop a website we are constantly making design decisions. A designer cannot abdicate their responsibility to design by saying “well, my jpeg looked great.” A developer cannot abdicate their responsibility to a codebase by saying “well that’s what they wanted in the comp.”

---
title: Building a Client Library for ArcGIS
slug: building-a-client-library
date: 2015.3.09
description: Writing a wrapper client library to smooth out design weirdness at the API level leads to plenty of design thinking on the way things should be.
type: text
---

This year I built a JavaScript wrapper for Node and the Browser around the ArcGIS REST API to simplify working with that platform as a developer. This was an exercise in API design, as well as making a tool that I wanted to use but that didn't exist yet. The project is a bare-bones library to ease interactions with the ArcGIS REST API in JavaScript and Node apps.

Sometimes – and for sure in this case – an API can be rough, built over time, and not provide the sort of logical models that work well with specific language environments. This was the case with the ArcGIS REST API I was running into. A lot of the decisions had been made over the course of years, and didn't translate very smoothly to a language as young as Node.js.

The first step was to figure out what problems I wanted to solve. A lot of my work with Esri PDX has been about content handling, and so this is where I started. I read all the docs to get a big picture of what's going on with the API, and talked to everyone who'd done work like this before to figure out what problems they needed to solve. From there I felt I had enough context and information to make the thing useful for more people than just me, and make sure that it was coherent with the underlying goals of the original API.

This project works to simplify and unify the gap between the ArcGIS REST API and a contemporary Node application. This library is a UI in the most basic sense of the term — it provides an interface between the developer and the servers. That interface needs to be well designed and thoughtful in order to make the process as smooth, intuitive, and pleasurable as possible.


One of the most important parts of the project is to provide developers with a way to access the ArcGIS platform without needing to architect their entire application around opinionated frameworks (like Dojo, for example). Though the library itself is written in ES6, it's distributed as plain, normal ES5 – both as a node package and a packaged bundle. This means it works both in Node and the browser, and has very few opinions on how it integrates with the rest of your app.


Right now, the library wraps most of the basic platform content management and interactions - getting and editing users, organizations, and items. The Node ArcGIS Client Library is an open source project — so its scope will increase as the community works to accomplish more goals and workflows.

## Setting up the client

The first step in using the library is initializing the client with your target portal.

```js
var ArcGIS = require('arcgis')
var arcgis = ArcGIS()
```

This sets up a default object for interacting with the API. This default is going to talk to ArcGIS Online as an anonymous, unauthenticated user. One can authenticate this client session as a named user by passing in a user token obtained from a successful OAuth login process.

```js
var arcgis = ArcGIS({
  token: namedUserToken
})
```

This isn't exclusive to ArcGIS Online. The API for interacting with your organization's installation of Portal or Server is the same. Setting up the client session with your instance is done by specifying your API domain.

```js
var arcgis = ArcGIS({
  domain: 'ago.my-server.com',
  token: namedUserToken
})
```

Beyond the initialization of the client, the library is exclusively async. All the functions return promises by default.

```js
function log (m) {
  console.log(m)
}
function ohNo (err) {
  return new Error(err)
}
arcgis.request()
.then(log)
.catch(ohNo)
```

You can also pass in a node-style callback, if you'd prefer.

```js
function log (err, results) {
  if (err) {
    return new Error(err)
  } else {
    console.log(results)
  }
}
arcgis.request({}, log)
```

Both methods work just as well, and use all the same business logic. I like promises, but maybe you don't. This is one of the main reasons the library does its best to avoid inflicting my opinions on your codebase.

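
Supporting both styles is a small, common pattern. This is not the library's actual source, just a sketch of the idea: always build the promise, and if a Node-style callback was passed, attach it to that promise.

```javascript
// Sketch of a promise-or-callback API (not the library's real internals).
// The real HTTP request is stood in for by an immediately-resolved value.
function request (options, callback) {
  var promise = Promise.resolve({ ok: true })
  if (typeof callback === 'function') {
    // Node-style callback: (err, results)
    promise.then(
      function (results) { callback(null, results) },
      function (err) { callback(err) }
    )
  }
  return promise
}

// Promise style:
request({}).then(function (r) { console.log(r.ok) })
// Callback style:
request({}, function (err, r) { console.log(err, r.ok) })
```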

Once we have an authenticated session, we can do all sorts of stuff — like figure out who we are:

```js
function hello (user) {
  console.log('Hello, ' + user.firstName)
}
arcgis.user('NikolasWise').then(hello)
```

We can get all of the items that user has in the platform:

```js
function getContent (user) {
  return user.content()
}
function logContent (content) {
  console.log(content)
}
arcgis.user('NikolasWise')
.then(getContent)
.then(logContent)
```

Or a list of all the groups that a user is a member of.

```js
function logGroups (item) {
  item.groups.forEach(function (group) {
    console.log(group.title)
  })
}
arcgis.user('NikolasWise').then(logGroups)
```

The library also can interact with the user's organization, returning information, members, or all the content associated with the organization.

```js
function logOrg (org) {
  console.log(org)
}
arcgis.organization('esripdx').then(logOrg)
```

The organization call defaults to 'self' — whatever organization the current user is a member of.

```js
function getMembers (org) {
  return org.members()
}
function log (members) {
  console.log(members)
}
arcgis.organization().then(getMembers).then(log)
```

Many of the content calls are abstractions or helper methods for longer, more complicated calls to the search endpoint.

```js
function getContent (org) {
  return org.content()
}
function log (items) {
  console.log(items)
}
arcgis.organization().then(getContent).then(log)
```

In this way we are able to create a transitional layer between the API itself (a super complicated call to the search endpoint) and what application developers need (all my organization's content).

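
As a sketch of what that transitional layer does: a friendly call like org.content() can expand into the raw query parameters the search endpoint wants. The helper below is illustrative (not the library's real code), though the parameter names mirror the REST search endpoint.

```javascript
// Illustrative helper: turn "give me an org's content" into the raw
// search-endpoint parameters. Not the library's actual implementation.
function orgContentQuery (orgId, num) {
  return {
    q: 'orgid:' + orgId,     // scope the search to one organization
    num: num || 100,         // page size
    sortField: 'modified',   // newest edits first
    sortOrder: 'desc'
  }
}

console.log(orgContentQuery('esripdx'))
```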

## Working with content

Platform content is the weakest link in the library as of today. ArcGIS supports a huge range of item types, and quite a number of sophisticated things you can do with those item types. For now the basics are more or less in place — like getting an item's details:

```js
var layerID = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'
function logItem (item) {
  console.log(item)
}
arcgis.item(layerID).then(logItem)
```

Or updating those details and editing the permissions:

```js
var layerId = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'
function updateItem (item) {
  return item.update({
    snippet: 'Building footprints in my neighborhood in Portland, Oregon'
  })
}
function shareItem (item) {
  console.log(item)
  return item.permissions({
    access: 'public'
  })
}
arcgis.item(layerId)
.then(updateItem)
.then(shareItem)
```

So far, there's some support for item-type-specific methods that are starting to open up the possibilities of manipulating your content from Node — like getting all the data in a layer.

```js
var layerID = 'a5e5e5ac3cfc44dfa8e90b92cd7289fb'
function getData (item) {
  return item.data()
}
function logData (data) {
  console.log(data)
}
arcgis.item(layerID)
.then(getData)
.then(logData)
```

There is a lot more of the platform we could cover - services, analysis, layer creation, and tile publishing are all actions that this library, or ones like it, could support.

---
title: Map as Context
slug: map-as-context
date: 2015.2.19
description: Understanding maps as designed objects and attempting to define a theory for making digital maps on the internet as good as old paper maps.
type: text
---

Looking at maps as they exist today on the internet, we have a pretty solid idea of what that means. It means they look like Google Maps. This is a pretty recent design solution to the 'what is a map on the internet' problem, only about 10 years old. Which is old for the internet, but pretty young for maps. The Google Maps model is a good one, too! It's a very effective way to present what is essentially a road map - a driver's atlas for navigating a city or a country. Google Maps replaces the AAA State Highway map really effectively, but perhaps there are some weak links with how it applies to other, less navigation-oriented maps.


There a large number of really beautiful maps that exist only on paper, and a large number of really ugly maps that exist on screens. How can we start to think about maps in a way that bridges this gap? Is there a way we can approach these other, not-a-roadmap-map maps more effectively to make them as good as their paper-bound cousins?

\n

To approach this question from an angle, it's worth taking a moment to think about what a map is. The map is a miniature that pivots around the body to represent the gigantic enormity of the physical world. The map shrinks the world down to a place it can be held in the hands and entirely seen with the eye. The map connects the vastness of reality to the body in way that can be handled - both physically and mentally.

\n

This creates tension with maps on the screen - especially the internet. The screen can not be touched, and the internet can not be related to the body. Phones and tablets mitigate this by bringing the screen closer, and moving to the size of the hand, but the core difficulty remains – if a map exists to scale down the enormity of the world to the size of the body, the internet itself has no boundaries or edges, and way to relate the screen to the body.

\n

Why is this connection between world and body important? The map provides context for understanding world-scale systems and landscape-scale concepts in a human-scale object. The map is a typology of communication that sits half way between the book and the visual art object. Both the book and the painting – or the print – are techniques that are used to provide access to concepts and ideas beyond the scale of a single individual. The book can contain centuries of intellectual thought, the painting can expose feelings and emotions that touch any number of people. If the map exists between these two mode of communication, that means that it's goal is to use a volume of thought, of data, of measurements to expose a broad, underlying concept. This can be an environmental truth (or the supposition of one) or a societal insight.

\n

The map does this through a very specific set of visual design tools with formal qualities that lend themselves to the problem at hand. These formalities are partly defined and structured by the technologies behind the production and distribution of the map.

\n

The first maps were hand drawn, and correspondingly have the attributes of other hand-made visual works. With the advent of printing, maps started to be carved into wood and duplicated. After wood came etching, and after that lithography. In each of the print techniques, certain marks are favored and made possible by the medium of the matrix itself. Shared across all the print techniques, however, is the concept of plates – individually drawn layers for different colors. Equating individual plates to individual colors to individual typological concepts shown on the map is a big reason why printed maps are so good.

\n

The careful and deliberate application of the map's formal characteristics to directly address the ideas and concepts to be communicated, and the use the map will be put to, is what makes a map good.

\n

On the internet, we make maps differently. GIS data sets mean that maps can be made through mathematical and analytic tools – comparing sets of data, and creating new sets of data to answer questions. A robust and open set of public data means that there are map making tools which provide ways to style and combine existing content.

\n

These techniques utilize a relatively static map that purports to usefully describe the entire geography of the planet. All of the maps made this way are intended to sit within a larger application, itself designed to solve a problem.

\n

Most of the time, these maps fail to provide a meaningful connection to the concepts presented - they lose the essential aspect of the map that joins world-to-human scales, instead operating at the world-to-world level. The endless map of the internet is itself incomprehensible to the body. The maps of the internet are simultaneously too broad and too simple, providing too much and too little. The problem the map presents itself as a solution to is usually far too muddy, and the resulting lack of clarity of purpose leads to a map that itself lacks clarity.

\n

Making maps for the screen means looking to the main purpose of the map – simplicity, clarity, and the miniaturization of the world to pivot around the human hand – while using the formal visual tools and design lessons of the previous several centuries of paper maps: the balance of simplification and exaggeration, clear conceptual separations, and embracing the limitations of the object to provide focus.

\n

In all, the map must be appropriate and natural for its intended use, playing its role in the overall purpose of the design solution.

\n"},{"meta":{"title":"Pixels don't matter.","slug":"pixels-dont-matter","date":"2013.10.21","description":"

They don't.

\n","collection":"texts","timestamp":1382338800000},"content":"

Talking about mobile design today, the conversation is couched in terms of "pixel perfect", or designs are made by "pixel pushers". This isn't a useful way to approach layout design.

\n

In the analog world we talk about the three fundamental components of print.

\n

\n

\n

\n

I know what you're thinking right now: "Wait, we're not analog here. We're digital. We need pixels." And that's true, we do need pixels. Let's reframe these principles:

\n

\n

\n

\n

Thinking about our substrate, our materials are our devices, which render with pixels.

\n

But our materials change -

\n

\n

\n

\n

Our devices gain more and more pixel density, their proportions change, technology improves, CSS abstracts a 'pixel' of its own, Android exists, and all of a sudden our materials get away from us and focusing on the pixel gives us an aneurysm.

\n

But don't worry, because:

\n

\n

\n

\n

We still have two other layers to work through.

\n

\n

Our materials don't dictate our aesthetics nor our concept, but materials do inform our decisions. So think about pixels, but put them in their proper place - as the substrate of our work.

\n

Measurement systems are abstract ideas -

\n

\n

An inch is not a thing, a foot is not a thing. The pixel is a physical thing, and as a physical thing we've seen that it's prone to change.

\n

The pixel is a terrible way to measure things.

\n

\n

So use another system to measure, any other system. The ratios, the proportions, the relationships and the hierarchies are important. That's where we do our work as designers.

\n

So don't stress pixels. Spare them a thought when you get started, and know that at some point in the process they'll make themselves apparent, and will need to be addressed. But the pixels will let you know, and when you need to deal with them they'll be there.

\n

Show Time

\n

These are some comps for the MapAttack mobile app.

\n

\n

\n

\n

Every element is specified by exact pixel counts. The typefaces are defined, the margins, the padding, and the border strokes are each precisely determined and labeled.

Don't get the wrong idea - this isn't my hypocrisy you're seeing. These designs didn't start life here, this is just where they ended up.

\n

Before handing off my design, I basically ran pre-press on it. Our developers do need pixels - they need these values. They need them for execution and production. So we, as their designers, need to give them pixels. But we also need to give them more than just that.

\n

\n

\n

\n

We need to give them the blueprints for how we think about the design. That means showing what's proportional as proportion, showing what's fluid as fluid. This gives our static comps the life they need to work for a huge range of devices.

\n

Love the Pixel, let it be a Pixel.

\n

Letting Pixels be beautiful, blocky, physical little things means using them to render our designs, not dictate them. It means using actual measurement systems to measure. And more importantly, it gives us a way to understand our layout better, to arrive at more considered designs, and to understand our designs as they live on actual hardware in the actual world.

\n"}] \ No newline at end of file diff --git a/src/data/texts/latest.json b/src/data/texts/latest.json index 72d981d..4ee7567 100644 --- a/src/data/texts/latest.json +++ b/src/data/texts/latest.json @@ -1 +1 @@ -{"meta":{"title":"Factorial! Race!","description":"

Finding an upper limit on factorials in JavaScript

\n","date":"2021.06.07","slug":"factorial-race","collection":"texts","timestamp":1623049200000},"content":"

For the past while now, I've been tinkering on a side project that builds and graphs arbitrary probability distributions created by dice rolling outcomes. It's extremely niche and dorky, but it's been a really fun way to explore both product design and development, and new concepts in math and programming that have otherwise never presented themselves during my career.

\n

This is an article about the second bit.

\n

One of the interesting things that I discovered early on was that when you add the ability to multiply and reduce dice rather than just multiply or reduce them (i.e., roll 1d6, roll the resulting number of dice, on a 4 or higher roll again, sum the total results. Complicated!), the distributions are not normal, which means in order to actually graph the distribution we need to calculate every possible outcome. Not a big deal, since computers are good at this sort of stuff!

\n

However, I quickly discovered an upper bound: the calculation requires working with factorials. When the factorials get big, JavaScript gives up and returns Infinity. This is because there is a maximum limit to the size of a double-precision floating-point number that JS uses for the Number type. Wow, this got both mathy and programmery really quickly.

\n
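For a sense of where that limit sits, the relevant constants are easy to inspect in a console:

```js
// The largest value a double-precision Number can hold.
console.log(Number.MAX_VALUE)     // 1.7976931348623157e+308

// Anything past that overflows to Infinity.
console.log(Number.MAX_VALUE * 2) // Infinity
console.log(2e308)                // Infinity
```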

This gave us an upper bound of 170!, since the rest of the distribution calculations don't like it when you pass them Infinity.

\n
```js
const factorial = (x) => x > 1 ? x * factorial(x - 1) : 1
factorial(170) // 7.257415615307994e+306
factorial(171) // Infinity
```
\n

Lucky for us, JavaScript has implemented a Big Integer type for integers that are … well … big! I was able to refactor my factorial function to use BigInts.

\n
```js
const big_factorial = (x) => x > 1 ? x * big_factorial(x - BigInt(1)) : BigInt(1)
big_factorial(BigInt(171)) // 1241018070217667823424840524103103992616605577501693185388951803611996075221691752992751978120487585576464959501670387052809889858690710767331242032218484364310473577889968548278290754541561964852153468318044293239598173696899657235903947616152278558180061176365108428800000000000000000000000000000000000000000n
```
\n

So what's our new upper bound? We can handle 170! easily, how high can we go? 1,000!? 10,000!?

\n
```js
big_factorial(BigInt(1000))  // 402387260…0n
big_factorial(BigInt(10000)) // RangeError: Maximum call stack size exceeded
```
\n

Turns out 1,000! is a walk in the park. 10,000! gets a little more interesting! The error the function returns is about too much recursion. We're calling big_factorial from big_factorial ten thousand times, and the browser thinks this means something is wrong, so it bails out on the process.

\n

So, what if we refactor our recursive big_factorial to use a loop?

\n
```js
const big_fast_factorial = (x) => {
  let r = BigInt(1);
  for (let i = BigInt(2); i <= x; i++)
    r = r * i;
  return r;
}
big_fast_factorial(BigInt(10000)) // 284625…0n in about 70ms
```
\n

10,000! is fast! We can reliably get the result in less than 100ms. And since our loop will run as long as it needs to, our upper bound should now be based on compute and return time, rather than type errors or browser guardrails. Let's see what we can do now:

\n
```js
big_fast_factorial(BigInt(20000))  // ~300ms
big_fast_factorial(BigInt(30000))  // ~650ms
// …
big_fast_factorial(BigInt(90000))  // ~7238ms
big_fast_factorial(BigInt(100000)) // ~9266ms
```
\n

Things … start to get slow above 30 or 40 thousand factorial. Every additional ten thousand added to our starting number adds more and more time to the computation - each multiplication gets slower as the BigInts grow, so the cost is worse than linear. I'm sure there's some fancy O(n) complexity notation to express this, but I don't really want to figure that out. It's too slow to use in a UI above, say, 50,000!.

\n

Turns out tho, even mathematicians don't really calculate factorials this big. They use Stirling's approximation instead, since it's faster and "good enough". It looks sort of like this:

\n
```
e^(ln(n!)) = n!
where
ln(n!) = ∑_{k=1}^{n} ln(k)
```
\n

It would be pretty cool to do this in JavaScript! And personally, I love "good enough". I've already got a handy function for running Big Sigma calculations:

\n
```js
const new_arr = (s, n) => {
  if (s < 0 || n < 0) { return [] }
  return Array.from(Array(n + 1).keys()).filter(v => v > s - 1)
}

const Σ = (min, max, fn) => new_arr(min, max).reduce((acc, val) => fn(val) + acc, 0)
```
\n

So lets try this out:

\n
```js
const log = (x) => Math.log(x)
Math.exp(Σ(1, 1000000, log)) // Infinity
```
\n

Oh no! The end result of our 1,000,000! function is still Infinity. That's because one million factorial is … very big. It could still fit into a BigInt, but then we have another problem: we can't run Math functions on the BigInt type. And we can't rewrite the functions to use BigInts because the type is, by definition, only for integers, and e is definitely not an integer. Even a math library like math.js has the same issues around typing, despite trying to account for it.

\n
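One possible way around both limits is to never leave log space at all: sum log10(k) instead of ln(k), then split the total into a mantissa and an exponent. This is only a sketch of that idea (the function name and output shape are invented here), and it trades exact digits for a readable scientific-notation estimate:

```js
// Approximate n! as mantissa × 10^exponent by summing log10(k) for k = 2..n.
// The running sum stays far below Number.MAX_VALUE for any n we can loop over.
const log10_factorial = (n) => {
  let log_sum = 0
  for (let k = 2; k <= n; k++) log_sum += Math.log10(k)
  const exponent = Math.floor(log_sum)
  const mantissa = 10 ** (log_sum - exponent)
  return { mantissa, exponent }
}

const { mantissa, exponent } = log10_factorial(1000000)
console.log(`1000000! ≈ ${mantissa.toFixed(4)}e+${exponent}`)
// roughly 8.2639e+5565708 — a 5,565,709 digit number, estimated in milliseconds
```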

Naturally, this leads to a simple proposal: FaaStorials! Fast Factorials as a Service! Since factorials are immutable, it should be possible to crunch (slowly) the first 1,000,000 or so, write them to a database, and provide an API for querying and returning them on demand. Even a slow network request would be faster than computing the factorial locally. I wrote a script to do exactly that and got about 7,000 rows written before I realized it would probably be expensive.

\n

According to my rough estimate, 1,000,000! would send a response that weighs about 700kb, and the whole database would be in the neighborhood of 350gb. That would cost me about $80 a month to store, and maybe another $100 a month to pay for the requests. I pulled the plug on the script.

\n

As with many problems, the upper bound ends up being defined by time and money, the end!

\n"} \ No newline at end of file +{"meta":{"title":"The Shape of the Problem","slug":"shape-of-the-problem","date":"2021.06.29","description":"

Ontological data structures, real-time editing, and what a web app really is man.

\n","type":"text","collection":"texts","timestamp":1624950000000},"content":"

At work the other day I was thinking about a problem: Websites as User Generated Content, a part of the business I've been low-key thinking about for nearly a year. A chance conversation with Reuben Son about Vox's content editing tool Kiln altered my perspective on who a user is when thinking about UGC. For them, their users are their editors and authors. Kiln works by loading an editor interface directly over the rendered web page, and allows for editing of any portion of that webpage.

\n

It occurred to me that one could use a real-time NoSQL database like FaunaDB or Firebase to store a document model, run an app that subscribes to changes to this database and renders the document model into the DOM, then do the same bit where an editor lays over the page and allows for editing, posting changes directly to the NoSQL database. The resulting update would re-render the document for the editor, and anyone else also currently viewing the app. This would look like Squarespace, but be naturally multi-tenant. After editing, a static site could be generated and hosted on S3 to serve to a larger audience. Questions around draft states, not publishing another user's edits, and other logistical things started to crowd my mind, but the core idea was interesting enough for me to decide to put a prototype together.

\n
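The loop in that idea is small enough to sketch with a plain in-memory store standing in for FaunaDB or Firebase (the store API, document shape, and render function here are all invented for illustration, not any real SDK):

```js
// A tiny in-memory stand-in for a real-time document database.
// Every subscriber re-renders whenever any client writes a change.
const createStore = (doc) => {
  const subscribers = new Set()
  return {
    subscribe: (fn) => { subscribers.add(fn); fn(doc) }, // render once on subscribe
    update: (patch) => {
      doc = { ...doc, ...patch }
      subscribers.forEach((fn) => fn(doc))               // push to everyone watching
    },
  }
}

// The "renderer": turns the document model into markup.
const render = (doc) => `<h1>${doc.title}</h1><p>${doc.body}</p>`

const store = createStore({ title: 'Hello', body: 'First draft.' })

let viewerPage, editorPage
store.subscribe((doc) => { viewerPage = render(doc) }) // someone else viewing the app
store.subscribe((doc) => { editorPage = render(doc) }) // the editor overlay

// The editor posts a change; both renders update immediately.
store.update({ body: 'Second draft.' })
```

A real version would swap `createStore` for a database subscription, but the shape of the data flow is the same.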

Unfortunately (well…) before I could get to it, I was dropped in cold to a meeting about an impending migration of our knowledge base. We have hundreds of articles with URLs that are going to get hundreds of new URLs, and we have to update them across our entire application, hopefully also solving the problem for good and never having to do it again. Since our larger content strategy was at the forefront of my mind, I pitched creating a Knowledge Organization System that associates a UUID with an article URL; applications then consume that UUID and never worry about the URL. Front the whole thing with a content management system and our support team can update article URLs whenever, and it's never a problem again.

\n
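The core of that pitch is tiny: applications hold onto a stable UUID, and one lookup table owns the URLs. A sketch, with the UUIDs, URLs, and helper name all made up:

```js
// The knowledge organization system: UUID → current article URL.
// When support migrates an article, only this record changes.
const articles = new Map([
  ['a1b2c3d4-0000-0000-0000-000000000001', '/help/getting-started'],
  ['a1b2c3d4-0000-0000-0000-000000000002', '/help/billing-faq'],
])

// Applications link by UUID and resolve at render (or redirect) time.
const resolveArticle = (uuid) =>
  articles.get(uuid) ?? '/help' // unknown IDs fall back to the help home page

// A migration is a single write; every consumer picks it up automatically.
articles.set('a1b2c3d4-0000-0000-0000-000000000002', '/support/articles/billing')
```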

That's when I realized that these two problems are the same problem. Both have the same set of concerns and desired behaviors: a collection of structured data documents that support CRUD operations, paired with a visualization of the document. So:

- A collection of documents.
- A system for creation and editing of these documents.
- A web app for rendering and visualizing these documents, either in progress or some production state.

Rephrased, the components are:

\n
1. The Ontology / Structure
2. The Editor Interface
3. The Rendered Form
   a. A Dynamic form
   b. A Static form
\n

Note that the Dynamic form is not necessarily a server or client-side rendered experience, but a way of seeing the changes you're making before committing to a production state. The Static form then is not necessarily a static asset (it can be server or client side generated from a database) but denotes a stable production state as viewed by an audience.

\n

For the knowledge base, the ontology is the index of articles. For the UGC websites, it is the content and component structure. The editor interface is a CMS, either a product or, like Kiln, something in-house that sits on top. The rendered form for the sites is a static HTML build; for the knowledge base it's a 301 redirect service.

\n

This is super abstract on purpose! I think this is the core structure of all web apps. Since this structure is so general as to be mostly useless (maybe interesting as a teaching tool), the value of a web app typology must come from taking an opinionated perspective on all three of the above points. This can be used as a tool to examine and critique popular web app typologies to validate the concept: how does a given typology express its disposition and behaviors in each component?

\n

Our first typology for analysis is the "Static Site" – an unchanging directory of HTML files generated at build time from a codebase, largely on a developer's local machine. The ontology of many popular static site generators is the local file system and its attendant metadata. The editor interface is the developer's command line and the source code repository (git really is an excellent content management system). The dynamic form is a dev server that runs locally and re-renders changes in real-time, and the static form is the built source code output to static assets.

\n

Another popular typology is the "Wordpress Site". A classic in web development. Here, the ontology is a MySQL database whose structure is invented ad-hoc by the developer. The editor interface is a PHP (and React, I guess, with Wordpress 5) web application that allows for manipulation of the MySQL database. The dynamic form is the "preview" of database changes or a "staging" environment for code and data, while the static form is determined by PHP templates that fetch data from the database and interpolate pages at run time, on each request.

\n

The "Shopify" is structurally the same as the "Wordpress", but swap PHP for Ruby like a good early 2000's startup and make the Ontology a pre-determined e-commerce structure. I think this is the dominant web app disposition, with a range of opinions on how much should happen on the server and how much should happen on the client.

\n

Two other typologies I think are worth exploring: the "Notebook" and the "Data Driven Web App". The Notebook, like Observable, positions the ontology as an unstructured document. The editor interface is a word processor app for that document, and the rendered form is a combination of a pre-set app framework and the contents of the document. The dynamic form is a draft state of the document, the static form is a published state. Notebooks are very interesting and different, and Observable is a great example of one. Since we're in the neighborhood of Mike Bostock, let's talk about "Data Driven Applications". The ontology is the data doing the driving. The rendered form is source code for visualizing the data — dynamic while run locally during editing, static when hosted on a server or CDN. The editor interface is relinquished to whatever real-world process governs the collection of the data.

\n

Each typology is a powerful conception of what a web app can be, and each one has a unique and distinct perspective on the three important parts of what a web app is. No one is better than the others, since they each have different relationships, and different definitions of good.

\n

The shape of my two problems that were actually one problem gave me an idea for a new kind of web app typology, one that borrows from the Semantic Web and real-time web apps. The dispositions and technical behavior of the app would look something like this:

\n

1: The Ontology

\n

This kind of app will use off-the-shelf RDFa or JSON-LD ontologies. These ontologies can be extended or created, but must be valid RDFa or JSON-LD (either is fine, since they can be machine-translated into each other). This allows for deeply semantic structures, machine-readable relationships, and lossless data transfer between systems. Structuring this data as a graph allows for narrow, tailored consumption of the data via GraphQL without dedicated API development and maintenance. It also allows for the entire data system to be visualized.

\n
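For a feel of what "off-the-shelf" means here, a knowledge base article described with schema.org's Article type might look something like this (the identifier and URL are placeholders for the sketch):

```js
// A knowledge base entry as JSON-LD, using schema.org vocabulary.
// "@context" is what makes the keys machine-resolvable.
const article = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  identifier: 'a1b2c3d4-0000-0000-0000-000000000001', // the stable UUID
  url: '/help/getting-started',                       // the part that churns
  headline: 'Getting Started',
  isPartOf: { '@type': 'CreativeWork', name: 'Knowledge Base' },
}

// Serialized, this is exactly what a <script type="application/ld+json">
// tag or an API response would carry.
const payload = JSON.stringify(article)
```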

Using these semantic ontologies rather than providing a blank slate and letting the data structures grow on an ad-hoc basis also saves a lot of time in creating the data models; since it's all semantic and robot-friendly, a single URI to the ontology should be enough to populate all the content models any given editor interface might need.

\n

2: The Rendered Form

\n

The guiding principle of the rendered form is to be as light as possible over the wire, for both the dynamic and static forms.

\n

The rendered form splits the difference between a JAMstack real-time application and a static site. The dynamic form is hosted as a web app, and subscribes to real-time changes in the ontology documents. As the data changes, the dynamic form updates to reflect it. This takes the development build off of the local machine and puts it where more than one person can see it at a time. The static form is a built collection of static HTML files that can be hosted from any CDN to the wider audience.

\n
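The static form is then just the same render pass run once over every document and written out as files. A minimal sketch, where the document shape and render function are invented for illustration:

```js
// Build step: render every document in the collection to a static page.
// In production these strings would be written to files and pushed to a CDN.
const renderPage = (doc) =>
  `<!doctype html><title>${doc.title}</title><main>${doc.body}</main>`

const buildSite = (documents) => {
  const site = new Map() // filename → rendered HTML
  for (const doc of documents) site.set(`${doc.slug}.html`, renderPage(doc))
  return site
}

const site = buildSite([
  { slug: 'index', title: 'Home', body: 'Welcome.' },
  { slug: 'about', title: 'About', body: 'Hello.' },
])
// site.get('about.html') → '<!doctype html><title>About</title><main>Hello.</main>'
```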

3: The Editor Interface

\n

The editor is completely separate from the renderer, but a common protocol unites it with the renderer on one end and the ontology on the other. Load the editor on any given dynamic rendered page to get an experience where any given property of the ontology can be edited, written to the database, and the effects seen immediately in real-time by anyone connected to the dynamic render. The diffs can then be stashed, discarded, or published.

\n

Conclusion

\n

The end result of an app like this would be an experience sort of like Glitch, sort of like Squarespace and sort of like Observable.

\n

With permissions around what rendering or editing app can see or touch what in the database, it's possible and even important that any given concept that relates to the system can be represented in the system — either the data itself or a metadata record that is indexical to the data. This allows the entire system to be meaningfully connected, which enables solutions for many common problems (URI mapping, incompatible data structures, duplicated databases, nightmare migrations, vendor lock-in) and opens the door to new use cases and implementations, like bespoke CRMs seamlessly integrated into a product or an ecosystem of editing experiences that are completely independent of any given renderer or ontology.

\n

A clumsy handle for this kind of app could be a Semantic Mono-Database. Not as catchy as JAMstack, SPA, or Static Site. We'll get there tho, I'm sure a meaningful name will present itself as I work to build the first real one of these things.

\n"} \ No newline at end of file