I am excited to finally share our recent paper "Filtering After Shading With Stochastic Texture Filtering" (with Matt Pharr, @marcosalvi, and Marcos Fajardo), published at ACM I3D'24 / PACM CGIT, where we won the best paper award! 1/N
@BartWronski @mtothevizzah @aras @demofox I'm jumping in clueless about motorcycles and who Wade is, but completely up for getting on a train to Poland and joining in 😊
I very much relate to missing inspiring discussions 💜
@jkaniarz @superfunc Right! With the 16xAA coverage masks you can't do effects like that.
It's an anti-aliasing technique that is really nice for "sharp" rendering of a glyph without needing to align it to the target pixel grid.
The approach expands on "coverage vs shade" and on how to take the samples, but it still uses a glyph cache to store those values (i.e. a pre-computed rasterisation of a glyph at a certain target pixel size).
So it's still a grid of values for coverage, not distance to edge.
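To make the distinction concrete, here's a toy sketch of that "grid of coverage values" idea — this is *not* the paper's method, just a minimal illustration assuming the simplest possible setup: a shape is supersampled once into a small grid of fractional coverage values, which is what a glyph-cache entry would store (as opposed to a signed-distance field). The `circle_inside` "glyph" and all names here are made up for the example.

```python
def circle_inside(x, y):
    """Toy stand-in for a glyph: a disc of radius 0.4 in the unit square."""
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.4 ** 2

def coverage_grid(inside, size, ss=4):
    """Pre-rasterise a shape into a size x size grid of fractional
    coverage values in [0, 1] (a hypothetical 'glyph cache' entry),
    by supersampling each cell ss x ss times."""
    grid = []
    for j in range(size):
        row = []
        for i in range(size):
            hits = 0
            for sj in range(ss):
                for si in range(ss):
                    # Sample position inside cell (i, j), in unit-square coords.
                    x = (i + (si + 0.5) / ss) / size
                    y = (j + (sj + 0.5) / ss) / size
                    hits += inside(x, y)
            row.append(hits / (ss * ss))
        grid.append(row)
    return grid

# Interior cells come out 1.0, exterior cells 0.0, and edge cells land
# in between — fractional coverage, not distance to the edge.
grid = coverage_grid(circle_inside, 8)
```

Each stored value answers "how much of this cell does the glyph cover", which is why effects that need the distance to the outline (outlines, glows, etc.) aren't directly expressible from such a cache.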
I've had half a mind to read it properly and try it, but I actually haven't 😅
I saw it renders multi-color emojis, but I'm unsure what the possibilities are for effects.
A while ago I started to wonder why the "bicubic" image filter inside #blender behaves exactly the way it does. That led me down a rabbit hole of trying to figure out what apps mean by "cubic"/"bicubic", and of course everyone means a different thing.
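A concrete way to see why "bicubic" is ambiguous: most of the common variants are members of the Mitchell–Netravali (B, C) family of cubic kernels, and different apps just pick different (B, C) pairs. A small sketch (the variant labels are the usual informal names, not any particular app's terminology):

```python
def cubic_kernel(x, B, C):
    """Mitchell-Netravali generalized cubic reconstruction kernel."""
    x = abs(x)
    if x < 1:
        return ((12 - 9 * B - 6 * C) * x ** 3
                + (-18 + 12 * B + 6 * C) * x ** 2
                + (6 - 2 * B)) / 6
    if x < 2:
        return ((-B - 6 * C) * x ** 3
                + (6 * B + 30 * C) * x ** 2
                + (-12 * B - 48 * C) * x
                + (8 * B + 24 * C)) / 6
    return 0.0

# All of these get called "cubic"/"bicubic" somewhere:
VARIANTS = {
    "B-spline (soft/blurry)": (1.0, 0.0),
    "Catmull-Rom (sharp)":    (0.0, 0.5),
    "Mitchell-Netravali":     (1 / 3, 1 / 3),
}

def weights(t, B, C):
    """4-tap filter weights for a sample at fractional offset t in [0, 1)."""
    return [cubic_kernel(t + 1, B, C),
            cubic_kernel(t, B, C),
            cubic_kernel(1 - t, B, C),
            cubic_kernel(2 - t, B, C)]
```

Every member of the family has weights summing to 1, but e.g. the B-spline (B=1, C=0) doesn't even interpolate the original samples, while Catmull-Rom (B=0, C=0.5) does — so two filters both labeled "bicubic" can produce visibly different images.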
Recently I made a demo project on WebGPU, rendering the classic Sponza scene with shadows and cloth & water simulation. My goal was mostly to learn WebGPU, test if it is production-ready, and have some fun on the way.
Here are my thoughts on WebGPU and the project itself ⬇️