
pekkavaa

@pekkavaa@mastodon.gamedev.place

An avid reader, computer graphics fan and atmospheric jungle beats enjoyer.

Demoscene: cce/Peisik.


pekkavaa, to GraphicsProgramming

A new post: "Shader post-processing in a hurry"

How to improve the perceived image quality by avoiding basic mistakes.

https://30fps.net/pages/post-processing/

pekkavaa, to n64

I experimented with subdividing a skybox quad by hand while keeping its UVs intact, then baking a new small texture at the new vertex positions to give more texels to the more detailed regions of the texture.

It produces an interesting result but I find the sharp details a bit jarring. Maybe it would work better as a subtle effect.

A skybox image comparison. Left: a quad with the baseline 44x44 texture. Right: the tessellated quad with the stretched-UV texture, which gives more precision in details but introduces some artifacts.
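Roughly, the bake step could be sketched like this (a minimal numpy sketch; the warp function, resolutions, and names here are made up for illustration — the real bake would use the hand-subdivided grid instead of an analytic warp):

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Sample a (H, W) texture at continuous UVs in [0, 1]."""
    h, w = tex.shape[:2]
    x = np.clip(u * (w - 1), 0, w - 1)
    y = np.clip(v * (h - 1), 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def bake_warped(src, out_size, warp):
    """Bake a new out_size x out_size texture: each output texel looks up
    the source through `warp`, which maps output UVs to source UVs."""
    v, u = np.mgrid[0:out_size, 0:out_size] / (out_size - 1)
    su, sv = warp(u, v)
    return bilinear_sample(src, su, sv)

# toy warp that spends more texels near the bottom of the image (v ~ 1)
warp = lambda u, v: (u, v ** 2)
src = np.random.rand(44, 44)       # stand-in for the baseline 44x44 texture
baked = bake_warped(src, 64, warp)
```

The same resampling, run the other way at draw time by the stretched vertex positions, is what gives the detailed regions more effective texels.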

pekkavaa,

@aeva I went and checked out the stars in Wind Waker and yes sharp they are 😀 Much of the fun stuff in realtime graphics is on the tech art side.

froyok, to gamedev

It's December 31, I couldn't close this year without posting a new article on my website. ;)

I wanted to write a recap (and some additional thoughts) on all the work I did building my own game engine called "Ombre" over the past year: https://www.froyok.fr/blog/2023-12-ombre-dev-blog-1/

pekkavaa,

@froyok Very interesting to see the progress in one place; for example, I didn't know that people still do stencil shadows, now with compute shaders. Your end results look really sweet!

And this is the first time I've heard of "Tony McMapface" 😅 Looks good though.

P.S. Gotta love that SDF axis widget.

Powersaurus, to random

Precomputing the dithered colours is faster than doing it at drawtime, who knew!? Now I have 16 colour ramps for light-dark for any colour picked as a wall colour.

Also playing around with actually editing the map and seeing what sort of wall patterns could be created.

Video of walking round a 3d raycaster environment to demonstrate different wall colours and a larger colour ramp used to do distance based shading.
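For reference, precomputing such ramps could look roughly like this (a numpy sketch, not the actual implementation; the 4x4 Bayer matrix and the colour-vs-black mixing are my assumptions):

```python
import numpy as np

# 4x4 ordered-dither (Bayer) matrix, thresholds in [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def build_ramp(color, steps=16):
    """Precompute a light-to-dark ramp for `color` (RGB in 0..1).
    ramp[level, y, x] is the dithered RGB to draw at screen (x, y)."""
    color = np.asarray(color, float)
    ramp = np.zeros((steps, 4, 4, 3))
    for level in range(steps):
        bright = 1.0 - level / (steps - 1)   # 1 = nearest, 0 = darkest
        on = BAYER4 < bright                 # which texels draw the colour
        ramp[level][on] = color
    return ramp

ramp = build_ramp((1.0, 0.4, 0.2))
# draw time is then just a table lookup per pixel:
#   framebuffer[y, x] = ramp[dist_level, y & 3, x & 3]
```

All the threshold comparisons happen once up front, so the inner raycaster loop only indexes the table.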

pekkavaa,

@Powersaurus That dark fog (darkness?) really gives it that classic Doom feel. There was also the trick of making walls that face certain directions subtly brighter to give an impression of lighting.

Might be hard to pull off with that palette though!
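If I remember right, Doom's version of this ("fake contrast") just nudges the wall's light level by a fixed amount depending on orientation — something like this hypothetical sketch (the function name, the ±16 step, and the 0..255 range are illustrative):

```python
def wall_light(base_light, dx, dy):
    """Doom-style fake contrast. (dx, dy) is the wall's direction vector
    in map units; axis-aligned walls get a small brightness bias."""
    if dx == 0:                          # wall runs north-south
        return min(base_light + 16, 255)
    if dy == 0:                          # wall runs east-west
        return max(base_light - 16, 0)
    return base_light                    # diagonal walls: unchanged
```

With a tiny fixed palette you'd have to fold this bias into which precomputed ramp level you pick, which is where it gets tricky.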

rygorous, to random

Numpy arrays + dtype are fantastic for when you want to analyze a few million numerical datapoints and are so low friction they've largely displaced my use of CSVs for ad-hoc data dumps and analysis.
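The pattern is roughly this (field names here are hypothetical): a structured dtype gives named, typed columns, and `np.save`/`np.load` round-trip them with no parsing step.

```python
import numpy as np

# Named, typed columns — no CSV quoting or parsing on the way back in.
frame_stats = np.dtype([("frame",   np.uint32),
                        ("draw_ms", np.float32),
                        ("tris",    np.uint32)])

data = np.zeros(3, dtype=frame_stats)
data["frame"]   = [0, 1, 2]
data["draw_ms"] = [16.2, 17.1, 15.8]
data["tris"]    = [10500, 11200, 9800]

np.save("stats.npy", data)       # binary dump; the dtype travels with the file
loaded = np.load("stats.npy")
mean_ms = loaded["draw_ms"].mean()
```

Column access like `loaded["draw_ms"]` returns a plain numpy array, so the whole analysis toolbox applies directly to the dump.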

pekkavaa,

@elduvelle @rygorous Yeah this is news to me as well. Super useful!

eniko, to random

i got nerd sniped into messing with line rasterization again :|

pekkavaa,

@eniko Congrats! I love that DX11 spec. I made a debug SVG writer for my triangle rasterizer inspired by the diagrams there.

I made it to debug a certain "tricky" triangle that made per-pixel deltas explode. After all that work, well I'm not sure I really needed a fancy SVG to spot the issue 😅

aras, to random

Gaussian Splatting continues! This time, looking at clustering spherical harmonics for some size savings. "bike" and "garden" data fits under 100MB now, at quite acceptable quality! https://aras-p.info/blog/2023/09/27/Making-Gaussian-Splats-more-smaller/

pekkavaa,

@aras Super neat. The results look good and I'm impressed you went and did proper PSNR comparisons as well. The data sizes come down real nice.

When it comes to scikit-learn, I figured you need to use MiniBatchKMeans with "random" initialization to make it tolerably fast for large inputs. I learned this when making a VQ video compressor. Still slow though.
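The idea behind it, as a pure-numpy sketch (scikit-learn's `MiniBatchKMeans(init="random")` does this with far more care; the names and constants here are made up): update centers from small random batches instead of the full dataset each step.

```python
import numpy as np

def minibatch_kmeans(X, k, batch=256, iters=100, seed=0):
    """Toy mini-batch k-means with random initialization."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # "random" init
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.choice(len(X), batch)]
        # assign each batch point to its nearest center
        d = ((B[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # per-center learning rate decays as the center sees more points
        for i, c in zip(assign, B):
            counts[i] += 1
            centers[i] += (c - centers[i]) / counts[i]
    return centers

# two well-separated blobs as a toy input
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (500, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (500, 2))])
centers = minibatch_kmeans(X, 2)
```

Random init just samples k data points, which is what makes it cheap compared to k-means++ scanning the whole input.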

JanOrszulik, to pixelart
pekkavaa,

@JanOrszulik It's beautiful

blinry, to random

Last weekend, @winniehell and I released a new demo, with graphics made entirely from circles!

It won 4th place in the PC Demo compo at @evoke!

I just published a recording, hope you'll enjoy! 💖
https://www.youtube.com/watch?v=_ilDDiD-LRs

pekkavaa,

@blinry @winniehell @evoke This was so great in the compo. The smooth audio cues added much more atmosphere than I would've expected from a demo with just 10 circles.

pekkavaa, to n64

I made a weird Nintendo 64 demo for the @evoke demoparty Alternative Platforms compo and ranked 4th in the competition in the end.

Video capture: https://youtu.be/et2oI4jV78Y
Download and discussion: https://www.pouet.net/prod.php?which=94733

It's a pretty big ROM because I used uncompressed PCM audio for the music made by miika. Video takes about two megs. Made with the libdragon open source SDK with no proprietary Nintendo code in sight :)

pekkavaa, to random

I played with differentiable cellular automata for this year's Revision demoparty. My code is based on Mordvintsev et al.'s work, see https://distill.pub/selforg/2021/textures/

In this case we take a low-res G-buffer and run learned per-pixel update rules 64 times. The goal is to make the end result look painterly and also upscale the input by 2x. Turned out "OK" but not 100% there yet.

Code: https://github.com/seece/SingleImageNCAFiltering

pekkavaa,

Here's a somewhat cryptic video that shows how the system evolves.

There's a bilinear upscale halfway through after which a second learned CA continues the process. So the pipeline is G-buffer -> 32 iters of automaton #1 -> upscale+noise -> 32 iters of automaton #2 -> learned color conversion to RGB.

Luckily the μNCA paper (https://arxiv.org/abs/2111.13545) came with some code I could use directly for this :)
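The shape of the pipeline, as a toy numpy sketch (random weights stand in for the learned rules, the upscale is a crude nearest-neighbour placeholder for the bilinear step, and the channel counts and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_step(state, w):
    """One per-pixel update: 3x3 neighborhood mean as the 'perception',
    then a small residual update. A stand-in for the trained rules."""
    pad = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    neigh = sum(pad[dy:dy + state.shape[0], dx:dx + state.shape[1]]
                for dy in range(3) for dx in range(3)) / 9.0
    return state + np.tanh(neigh @ w)

def upscale2x(state):
    """Placeholder 2x upscale (the real pipeline uses bilinear)."""
    return state.repeat(2, axis=0).repeat(2, axis=1)

C = 8                                    # channels per cell
gbuf = rng.random((16, 16, C))           # stand-in low-res G-buffer
w1, w2 = rng.normal(0, 0.1, (2, C, C))   # stand-ins for learned weights
to_rgb = rng.normal(0, 0.1, (C, 3))      # stand-in learned color conversion

state = gbuf
for _ in range(32):                      # automaton #1
    state = ca_step(state, w1)
state = upscale2x(state) + rng.normal(0, 0.02, (32, 32, C))
for _ in range(32):                      # automaton #2
    state = ca_step(state, w2)
rgb = state @ to_rgb                     # final (32, 32, 3) image
```

Training backpropagates through all 64 of those steps, which is why it behaves like a very deep convnet.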


pekkavaa,

@aeva Between 5 and 10 minutes on an RTX 3090. That's a bit longer than I expected, tbh. I think it's because the "backprop through time" technique makes it effectively a 64-layer convnet (one layer per iteration), so it's pretty heavy.
