
badlogic

@badlogic@mastodon.gamedev.place

libGDX, Beginning Android Games, RoboVM, Spine. Member of the council of Warios.


godotengine, to godot

Seemingly, people still need to see proof.

Can everyone drop their 3D Godot games in the replies please? 🎤⬇️

badlogic,

badlogic, to random

Interesting article on the production of Air Head, a short film (?) supposedly created with OpenAI's Sora. Lots of struggles and manual work. Like roto-ing the AI-generated balloon head and replacing it manually.

(Sorry for the post storm. Gotta get my high quality Mastodon post quota filled for the month. I'm done :)

https://www.fxguide.com/fxfeatured/actually-using-sora/

badlogic, to random

People of gamedev.place, I have a question: have any of you used generative AI in game production? I.e. to generate (production) "art", power conversational skills of NPCs, etc.

I can see how GenAI could be useful for blocking out things. I have yet to see a system that can generate content that doesn't require more work to get production ready than it would take to create manually.

Stuff like Unity's Muse Animate, Scenario, or (FSM forbid) Kaedim confuse me.

(InnoGames uses Scenario iirc)

[attached: image, video]

badlogic,

Liz Edwards over on the site that shall not be named is an excellent artist and also an excellent source for Kaedim roasts.

badlogic,

@bartholin LOD == level of detail.

As a 3D object gets further away from the camera, it gets smaller on screen, e.g. only covers 3% of the screen.

In that case, it makes no sense to render it at full fidelity. So you use a simplified mesh. You won't notice the difference at that size.

You ship a model at various such "resolutions". Each is one level of detail. You select the level of detail based on the distance to the camera. This helps performance, as you do fewer computations.
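The distance-based selection described above can be sketched roughly like this (a minimal illustration; the class name and threshold values are made up for this example, not from any particular engine):

```java
// Hypothetical distance-based LOD selection. Each threshold crossed
// drops one level of detail (0 = full-detail mesh).
class LodSelector {
    // Distances (in world units) past which we switch to a coarser mesh.
    static final float[] THRESHOLDS = {10f, 30f, 80f};

    // Returns 0 (full detail) up to THRESHOLDS.length (coarsest mesh).
    static int selectLod(float distanceToCamera) {
        int level = 0;
        for (float t : THRESHOLDS) {
            if (distanceToCamera > t) level++;
        }
        return level;
    }

    public static void main(String[] args) {
        System.out.println(selectLod(5f));  // close-up: full detail
        System.out.println(selectLod(50f)); // mid-range: coarser mesh
    }
}
```

Real engines usually also add hysteresis or screen-coverage metrics so objects don't flicker between levels at a threshold boundary.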

badlogic,

@bartholin in the screenshot above, you see the LODs going from 95k "polys" to 75k "polys" to 55k "polys".

That's ... a silly step size. Usually, it's more like an order of magnitude reduction per LOD level. This indicates that Kaedim, the company offering AI-generated 3D models, has no idea what they are doing (or preys on customers who are not knowledgeable)

badlogic,

@bartholin here's a visual example of what LOD levels usually look like.

badlogic, to random

With Spine 4.2 done and released, I dorked off yesterday and built a Spine runtime for Android. It's a plain old Android View, using the Canvas API for rendering.

I'm lazy, so I went straight with the spine-libgdx runtime as the base. That's written in Java. All of it. And yet, this little thing screams (especially compared to the competition in the mobile app animations space, which is written in C++)

The renderer is 200 LOC, batches and doesn't generate any garbage that'd trigger stuttering.
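A garbage-free batching renderer essentially comes down to preallocating vertex buffers once and reusing them every frame. A rough, hypothetical sketch of that allocation pattern (not the actual spine-android code):

```java
// Minimal sketch of a garbage-free vertex batcher. The buffer is
// allocated once up front and reused; when it fills up we flush
// (draw) instead of growing it, so no per-frame allocations occur
// that could trigger GC-induced stuttering.
class VertexBatch {
    private final float[] vertices; // x,y pairs, allocated once
    private int count = 0;          // number of vertices currently batched

    VertexBatch(int capacity) {
        vertices = new float[capacity * 2];
    }

    void add(float x, float y) {
        if (count * 2 == vertices.length) flush();
        vertices[count * 2] = x;
        vertices[count * 2 + 1] = y;
        count++;
    }

    void flush() {
        // In a real renderer this would submit vertices[0..count*2)
        // to the drawing API; here we just reset the write position.
        count = 0;
    }

    int size() { return count; }
}
```

The actual renderer would batch colors and UVs the same way and hand the arrays to the Canvas API (presumably `Canvas.drawVertices`), but the reuse-instead-of-allocate pattern is the point.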

[attached: 3 images]

badlogic, (edited )

ART, the VM running your Java/Kotlin code, has come a long way. Those terrible little loops scaling UVs and building up the colors "array" don't really make a dent in the frame time. Which is nice, because they are very icky.

Sadly, the Android Canvas API requires me to submit vertices in an unpacked way. Skia also does a copy natively. Still fast and efficient enough, easy to maintain, with great compatibility.

With the PoC out of the way, it shouldn't take long to get this into a shippable state.

badlogic, to random

A few corps asked me to do a workshop with them on machine learning/large language models.

I put together a simple, extremely applied crash course on the basics of machine learning. All the materials are CC-BY-NC licensed. You'll need basic programming skills, but no previous ML experience or heavy duty math skills.

You'll learn what all the fuss is about and how to build simple systems yourself. You only need a browser (it can also run locally via Jupyter).

https://github.com/badlogic/genai-workshop/tree/main

badlogic,

Happy to receive feedback/corrections.

badlogic, to random

And another great read. This is a non-trivial subject. The author managed to walk the reader through all abstraction levels beautifully, imo.

Didn't know NVIDIA's Volta arch works as described.

https://www.collabora.com/news-and-blog/blog/2024/04/25/re-converging-control-flow-on-nvidia-gpus/

badlogic, to random
badlogic,

@fuchsiii Chucklefish iirc. I think exploring new tools is super valuable, even if you return to where you came from. Usually, you'll have a new perspective on your old state of affairs.

Console access for anything not sanctioned by the manufacturers has always been a pain. See Godot,

badlogic, to random

Every day I wait for the tram to get back home and have to look at this campaign poster by Austria's extreme-right party FPÖ. It's AI generated.

Let's learn how to identify AI images using two recent posters by FPÖ.

1st image: Campaign poster for FPÖ's "Blue-Harry", supposedly a metal worker
2nd image: What Blue-Harry really looks like. He's a credit risk manager
3rd image: Poster for "Heimat Games", a Hitler Youth-inspired motif for "homeland games"

Let's start with the 3rd image

[attached: 2 images]

badlogic,

The generative image AI "classic" artifact: fucked up fingers.

  • Count them
  • Look for "melted" fingers
  • Look for weird finger poses (or weird poses of other limbs)

Easy

[attached: image]

badlogic,

Less obvious: the reflection on the cup doesn't take the actual surroundings into account.

These AI models lack a full "world model", meaning, they usually can't generate correct reflections. Light and shadow shenanigans can also give away AI generated images.

badlogic,

There's more, but let's move on to AI-Blue-Harry. This one is a little trickier, since the motif was chosen in such a way that artifacts are less apparent.

Here's a trick: try to replicate the image in question with Midjourney, Stable Diffusion, or DALL-E. That's what I did here (prompt, Midjourney, DALL-E)

The image seems to have been generated by Midjourney. DALL-E is too smooth. Let's look for artifacts that also turn up in the Midjourney images.

[attached: 2 images]

badlogic,

The apron makes zero sense physically.

AI-Blue-Harry vs. Midjourney

[attached: image]

badlogic,

It can be more subtle. Let's have a look at forehead wrinkles. They work if you don't look too closely. If you do take them in for more than a second, they start going into uncanny valley territory. That's because they, too, are physically foobar (and AI-Blue-Harry's skin around the wrinkles is too smooth)

AI-Blue-Harry vs. Midjourney

[attached: image]

badlogic,

Moving on to the other skin patches. Some generative models like Midjourney have tell-tale artifacts in skin areas. They may be too smooth (as seen on the Hitler Youth poster). Or they can have weird wrinkles that look like fingerprints or film grain.

AI-Blue-Harry vs. Midjourney.

Bonus content: Blue-Harry was obviously drunk when he picked his tattoo.

[attached: 2 images]

badlogic,

And finally, muscle structure. For buff anatomies, AI models still suck very hard at getting all the dents and bumps right, especially for forearm muscles. It's a bit less obvious here in AI-Blue-Harry's case, but you can easily see it on the IRL poster.

AI-Blue-Harry vs. Midjourney

[attached: image]

badlogic,

So the party that was founded by Nazis is now so ashamed of their own bodies that they have to use generative AI to create their propaganda. Noted.

In the case of AI-Blue-Harry, they not only want to follow some "ideal", but also want to imply that he's one of "us": a hard-working metal worker. With a shitty Thor hammer and tribal tattoo. While the real "Blue-Harry" (a nickname he hasn't had in previous campaigns) is a banker.

They are getting dumber by the minute.

badlogic,

@Landa yep, all them Insta gym sharks.

badlogic,

@rattenhirn yep, that's what I imagine a credit risk manager to look like.

badlogic,

@graves501 oh, I can totally believe it. points at everything
