nyakojiru,
@nyakojiru@lemmy.dbzer0.com avatar

shit is going too far, as expected, and governments don’t give a fuck about societies. Only in the EU are there a few human-like movements.

mindbleach,

I’m genuinely shocked there’s no show or movie made with AI yet. If you expect to put in “good movie five stars Oscar bait” and get two hours of footage, yeah, no. Obviously no. But real movies are made from small pieces. Existing public tech is enough to really half-ass your way around needing actors or locations. (Or Blender skills.)

But no - it’s always ‘look how neat this looks!’ and not ‘here is a story I’ve been wanting to tell.’

ThePowerOfGeek,

YouTube is about to get flooded by the weirdest meme videos. We thought it was bad already, we ain’t seen nothing yet.

eagles_fan,

This can only be bad for artists and if you are happy about it you are a fascist

kandoh,

I honestly can’t see this replacing anyone. This is the equivalent of stock footage. It’s just gonna replace Shutterstock I guess

Vex_Detrause,

Imagine VR giving you an AI-generated world. It would be Ready Player One IRL.

AgentGrimstone,

I recently played a game where people found immortality and each individual just lived in their own personal virtual reality for thousands of years. It’s kinda creepy seeing the recent advances in technology today lining up to that, minus the immortality part.

nossaquesapao,

What game was that?

AgentGrimstone,

It’s a spoiler to reveal the game so…

SPOILER: Sorry, I don’t know how to do spoiler tags on this app but I’m referring to the antagonists in horizon forbidden west. Here’s another sentence just to help hide the game for anyone scrolling by.

Toribor, (edited )
@Toribor@corndog.social avatar

The compute power it would take to do that in realtime at the framerates required for VR to be comfortable for two separate perspectives would be absolutely beyond insane. But at the rate hardware improves and the breakneck speed these AI models are developing maybe it’s not as far off as I think.

Blue_Morpho,

An AI-generated VR world would be a single map environment, generated up front the same way a game builds a level while you wait at a loading screen when it starts or when you move to an entirely new map.

A text-to-3D-game-asset AI wouldn’t regenerate a new 3D world on every frame, in the same way you wouldn’t ask an AI to draw a picture of an orange cat and then ask it to draw another picture of the same cat shifted one pixel to the left just to move the cat a pixel. The result would be a totally different picture.

Toribor, (edited )
@Toribor@corndog.social avatar

I think we’re talking about different kinds of implementations.

One being an AI-generated ‘video’ that is interactive, generating new frames continuously to simulate a 3D space that you can move around in. That seems pretty hard to accomplish for the reasons you’re describing: these models are not particularly stable or consistent between frames. The software has no understanding of physical rules, just of how a scene might look based on its training data.

Another and probably more plausible approach is likely to come from the same frame-generation technology in use today in things like DLSS and FSR. I’m imagining a sort of post-processing that can draw details on top of traditional 3D geometry. You could classically render a simple scene and let AI draw on top of the geometry in realtime to fake higher levels of detail. This is already possible, but it seems reasonable to imagine these tools getting more creative and turning a simple, blocky, undetailed 3D model into a photorealistic object. Still insanely computationally expensive, but grounding the AI with classic rendering to stabilize its output could be really interesting.
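That “classic render plus AI detail pass” loop could be sketched roughly like this. Purely illustrative: every name here is made up, and `enhance` is a stand-in for whatever model would actually do the detail pass.

```python
# Hypothetical sketch of the "classic render + AI detail pass" idea.
# The cheap rasterized frame anchors the generator, which is what would
# keep output stable between frames. `enhance` stands in for a model.

def render_low_detail(width: int, height: int, base: int) -> list[list[int]]:
    # Stub rasterizer: a flat-shaded block, one brightness value per pixel.
    return [[base] * width for _ in range(height)]

def enhance(frame: list[list[int]], seed: int) -> list[list[int]]:
    # Stub AI post-process: deterministically adds "detail" on top of the
    # rendered geometry, so consecutive frames with the same seed cohere.
    return [[px + (x + y + seed) % 2 for x, px in enumerate(row)]
            for y, row in enumerate(frame)]

frame = enhance(render_low_detail(4, 4, base=8), seed=0)
```

The point of the sketch is the ordering: the geometry is rendered first and the generative step only perturbs it, rather than hallucinating the whole frame from scratch.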

dylanTheDeveloper,
@dylanTheDeveloper@lemmy.world avatar

Shit posting 2.0 is here fellas

paulzy,

I wonder if in the 1800s people saw the first photograph and thought… “well, that’s the end of painters.” Others probably said “look! it’s so shitty it can’t even reproduce colors!!!”.

What it was the end of was talentless painters who were just copying what they saw. Painting stopped being a service and started being an art. That is where software development is going.

I have worked with hundreds of software developers in the last 20 years; half of them were copy-pasters who got into software because they tricked people into thinking it was magic. In the future we will still code, we just won’t bother with the things a prompt engineer can do in 5 seconds.

systemglitch,

I think that’s a bad analogy because of the whole being able to think part.

I’ll be interested in seeing what (if anything) humans will be able to do better.

General_Effort,

It was exactly the same as with AI art. The same histrionics about the end of art and the dangers to society. It’s really embarrassing how unoriginal all this is.

Charles Baudelaire, father of modern art criticism, in 1859:

As the photographic industry was the refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies, this universal infatuation bore not only the mark of a blindness, an imbecility, but had also the air of a vengeance. I do not believe, or at least I do not wish to believe, in the absolute success of such a brutish conspiracy, in which, as in all others, one finds both fools and knaves; but I am convinced that the ill-applied developments of photography, like all other purely material developments of progress, have contributed much to the impoverishment of the French artistic genius, which is already so scarce.


What it was the end of was talentless painters who were just copying what they saw. Painting stopped being for service and started being for art.

This attitude is not new, either. He addressed it thus:

I know very well that some people will retort, “The disease which you have just been diagnosing is a disease of imbeciles. What man worthy of the name of artist, and what true connoisseur, has ever confused art with industry?” I know it; and yet I will ask them in my turn if they believe in the contagion of good and evil, in the action of the mass on individuals, and in the involuntary, forced obedience of the individual to the mass.

InvaderDJ,

What it was the end of was talentless painters who were just copying what they saw. Painting stopped being for service and started being for art. That is where software development is going.

I think a better way of saying this is: it was the end of people who were just doing it as a job, not out of any great talent or passion for painting.

But doing something just because it is a job is what a lot of people have to do to survive. Not everyone can have a profession that they love and have a passion for.

That’s where the problem comes in when it comes to these generative AI.

Kedly,

And then the problem here is capitalism, NOT AI art. The capitalists are ALWAYS looking for ways to not pay us; if it wasn’t AI art, it was always going to be something else.

fidodo,

The hardest part of coding is managing the project, not writing the content of one function. By the time LLMs can do that it’s not just programming jobs that will be obsolete, it will be all office jobs.

anguo,

Her legs rotate around themselves and flip sides at 16s in. It’s still very impressive, but …yeah.

Marcbmann,

Wow didn’t see that the first time

fidodo,

This is a base model; just because it’s 90% there on its own doesn’t mean you can’t improve on it by adding extra safeguards. For example, you can get LLMs to be more accurate by asking another LLM to proofread the work. I’m frankly amazed that the base models are this good to begin with. I was fully expecting to need way more safeguards from the get-go, but we’re getting a lot even without them. And I fully expect there to be AI tools specialized to identify where the base model messes up and then correct it.
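The “second model proofreads the first” safeguard is basically a generate-review-retry loop. A minimal sketch, with made-up stub functions standing in for real model API calls:

```python
# Hypothetical sketch of the "second LLM proofreads the first" safeguard.
# `generate` and `review` are made-up stand-ins for real model API calls.

def generate(prompt: str) -> str:
    # Stub: a real implementation would call a generative model here.
    return f"draft: {prompt}"

def review(candidate: str) -> bool:
    # Stub: a second model would critique the first model's output and
    # accept or reject it. Here we just accept anything non-empty.
    return bool(candidate.strip())

def generate_with_proofreading(prompt: str, max_tries: int = 3) -> str:
    candidate = ""
    for _ in range(max_tries):
        candidate = generate(prompt)
        if review(candidate):  # reviewer approves -> done
            return candidate
    return candidate  # give up and return the last attempt
```

Same shape whether the output is text, images, or video: keep regenerating until a separate checker is satisfied or you run out of tries.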

mindbleach,

Got a laugh out of that, then remembered humans screw it up too. See Haruhi’s infamous “helicopter legs.”

anguo,

Couldn’t find anything with a quick search.

mindbleach,

Eh. Basically an anime did the same fuck-up for a pacing character… when the shot was a floor-level close-up of her legs.

gravitas_deficiency,

Ah yes, this definitely won’t have any negative ramifications.

/s

MonkderZweite,

I’m pretty sure that’s a model tho.

Flumpkin,

This is still so bizarre to me. I’ve worked on 3D rendering engines trying to create realistic lighting and even the most advanced 3D games are pretty artificial. And now all of a sudden this stuff is just BAM super realistic. Not just that, but as a game designer you could create an entire game by writing text and some logic.

fidodo,

Because it’s trained on videos of the real world, not on 3d renderings.

Flumpkin,

Lol you don’t know how cruel that is. For decades programmers have devoted their passion to creating hyperrealistic games and 3D graphics in general, and now poof it’s here like with a magic wand and people say “yeah well you should have made your 3D engine look like the real world, not to look like shit” :D

FatCrab,

Keep in mind that this isn’t creating 3D volumes at all. While immensely impressive, the thing being created by this architecture is a series of 2D frames.

ArmokGoB,

In my experience as a game designer, the code that LLMs spit out is pretty shit. It won’t even compile half the time, and when it does, it won’t do what you want without significant changes.

kspatlas,

Chatgpt once insisted my JSON was actually YAML

jkrtn,

Technically it is (valid JSON is also valid YAML, since YAML 1.2 was designed as a superset of JSON), but I agree that is imprecise and nobody would say so IRL. Unless they are being a pedantic nerd, like I am right now.
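For fellow pedants, the claim really does check out. A tiny illustration (the YAML call itself is only mentioned in a comment, since PyYAML is third-party):

```python
# YAML 1.2 was designed as a superset of JSON, so the pedantic claim
# holds: any well-formed JSON document is also a valid YAML document.
import json

doc = '{"format": "json", "items": [1, 2, 3]}'
parsed = json.loads(doc)
# Feeding the very same string to a YAML parser, e.g. PyYAML's
# yaml.safe_load(doc) (not shown, third-party), yields the same dict.
```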

DSTGU,

The correct usage of LLMs in coding, imo, is one use case at a time, building up to what you need from scratch. It requires skill: talking to the AI so it gives you what you want, knowing how to build up to the goal, reading the code it spits out so you know when it goes south, and actually knowing how to assemble the bigger-picture software from little pieces. But if you are an intermediate dev who is stuck on something, it is a great help.

That, or for rubber-ducky debugging. It’s also great at that.

colonial,
@colonial@lemmy.world avatar

That sounds like more effort than just… writing the code.

DSTGU,

It’s situationally useful.

Flumpkin,

deleted_by_author

    Traister101,

    You should refine your thoughts more instead of dumping a stream of consciousness on people.

    Essentially what this stream of consciousness boils down to is “Wouldn’t it be neat if AI generated all the content in the game you are playing on the fly?” Would it be neat? I guess so, but I find it incredibly unappealing, very similar to how AI art, stories and now video are unappealing. There’s no creativity involved. There’s no meaning to any of it. Sentient AI could probably have creativity, but what people like you who get overly excited about this stuff don’t seem to understand is how fundamentally limited our AI actually is currently. LLMs are basically one of the most advanced AI things rn, and yet all they do is predict text. They have no knowledge, no capacity for learning. It’s very advanced autocorrect.

    We’ve seen this kind of hype with crypto, with NFTs, and with Metaverse bullshit. You should take a step back and understand what we currently have and how incredibly far away what has you excited actually is.

    HeavyDogFeet,
    @HeavyDogFeet@lemmy.world avatar

    I don’t mean to be dismissive of your entire train of thought (I can’t follow a lot of it, probably because I’m not a dev and not familiar with a lot of the concepts you’re talking about) but all the things you’ve described that I can understand would require these tools to be a fuckload better, on an order we haven’t even begun to get close to yet, in order to not be super predictable.

    It’s all wonderful in theory, but we’re not even close to what would be needed to even half-ass this stuff.

    nucleative,

    Welcome to the club, my friend… Expert after expert has had this experience as AI has developed over the past couple of years, and we keep discovering that the job can be automated way more than we thought.

    First it was the customer service chat agents. Then it was the writers. Then it was the programmers. Then it was the graphic design artists. Now it’s the animators.

    Flumpkin,

    Yeah. And it’s not just how good the images look, it’s also the creativity. Everyone tries to downplay this, but I’ve read the texts and seen those videos, and just from the prompts there is a “creative spark” there. It’s not a very bright spark lol, but it’s there.

    I should get into this stuff but I feel old lol. I imagine you could generate interesting levels with obstacles and riddles and “story beats” too.

    Ultraviolet,

    Because sometimes the generator just replicates bits of its training data wholesale. The “creative spark” isn’t its own, it’s from a human artist left uncredited and uncompensated.

    Flumpkin,

    Artists are “inspired” by existing art or things they see in real life all the time. So the fact that these models can replicate art doesn’t mean they can’t generate art; it’s a non sequitur. But I’m sure people are going to keep insisting on this, so let’s not argue back and forth on it :D

    genesis,
    genesis avatar

    It seems to me that AI won’t completely replace jobs yet (though it will in 10-20 years), but it will reduce demand because of oversaturation plus ultra-productivity with AI. Moreover, AI will continue to improve. The work of a team of 30 people will be done by just 3.

    Traister101,

    Still waiting on the programmer part. In a nutshell, AI being say 90% perfect means you have 90% working code, i.e. 10% broken code. Images and video (but not sound) are way easier cause human eyes kinda just suck. A couple of the videos they’ve released pass even at a pretty long glance. You only notice the funny business once you look closer.

    General_Effort,

    I can’t imagine that digital artists/animators have reason to worry. At the upper end, animated movies will simply get flashier, eating up all the productivity gains. In live action, more effects will be pure CGI. At the bottom end, we may see productions hiring VFX artists, just as naturally as they hire makeup artists now.

    When something becomes cheaper, people buy more of it, until their demand is satisfied. With food, we are well past that point. I don’t think we are anywhere near that point with visual effects.

    HeavyDogFeet,
    @HeavyDogFeet@lemmy.world avatar

    Writer here, absolutely not having this experience. Generative AI tools are bad at writing, but people generally have a pretty low bar for what they think is good enough.

    These things are great if you care about tech demos and not quality of output. If you actually need the end result to be good though, you’re gonna be waiting a while.

    NounsAndWords,

    If you actually need the end result to be good though, you’re gonna be waiting a while.

    I agree with everything you said, but it seems in the context of AI development “a while” is like, a few years.

    HeavyDogFeet,
    @HeavyDogFeet@lemmy.world avatar

    That remains to be seen. We have yet to see one of these things actually get good at anything, so we don’t know how hard that last part is to do. I don’t think we can assume there will be continuous linear progress. Maybe it’ll take one year, maybe it’ll take 10, maybe it’ll just never reach that point.

    sudoreboot, (edited )
    @sudoreboot@slrpnk.net avatar

    Yeah a real problem here is how you get an AI which doesn’t understand what it is doing to create something complete and still coherent. These clips are cool and all, and so are the tiny essays put out by LLMs, but what you see is literally all you are getting; there are no thoughts, ideas or abstract concepts underlying any of it. There is no meaning or narrative to be found which connects one scene or paragraph to another. It’s a puzzle laid out by an idiot following generic instructions.

    That which created the woman walking down that street doesn’t know what either of those things are, and so it can simply not use those concepts to create a coherent narrative. That job still falls onto the human instructing the AI, and nothing suggests that we are anywhere close to replacing that human glue.

    Current AI can not conceptualise – much less realise – ideas, and so they can not be creative or create art by any sensible definition. That isn’t to say that what is produced using AI can’t be posed as, mistaken for, or used to make art. I’d like to see more of that last part and less of the former two, personally.

    Flumpkin,

    Current AI can not conceptualise – much less realise – ideas, and so they can not be creative or create art by any sensible definition.

    I kinda 100% agree with you on the art part, since it can’t understand what it’s doing… On the other hand, I could swear that if you look at some AI-generated images, it’s kind of mocking us. It’s a reflection of our society in a weird mirror. Like a completely mad or autistic artist that is creating interesting imagery but has no clue what it means. Of course, that exists only in my perception.

    But in the sense of “inventive” or “imaginative” or “fertile”, I find AI images absolutely creative. As such, it’s telling us something about the nature of the creative process, about the “limits” of human creativity, which is in itself art.

    When you sit there thinking up or refining prompts, you’re basically outsourcing the imaginative, visualizing part of your brain. An “AI artist” might not be able to draw well or even have the imagination, but they might have a purpose or meaning that they’re trying to visualize with the help of AI. So AI generation covers at least some portion of the artistic or creative process, but not all of it.

    Imagine we could have a brain-computer interface that lets us perceive virtual reality, like an extra pair of eyes. It could scan our thoughts, allowing us to “write text” with our brain, and then immediately feed back a visual AI-generated stream that we “see”. You’d be a kind of creative superman. Seeing / imagining things in their head is of course what many people do their whole life, but not in that quantity or breadth. You’d hear a joke and you would not just imagine it, you’d see it visualized in many different ways. Or you’d hear a tragedy and…

    sudoreboot,
    @sudoreboot@slrpnk.net avatar

    Like a completely mad or autistic artist that is creating interesting imagery but has no clue what it means.

    Autists usually have no trouble understanding the world around them. Many are just unable to interface with it the way people normally do.

    It’s a reflection of our society in a weird mirror.

    Well yes, it’s trained on human output. Cultural biases and shortcomings in our species will be reflected in what such an AI spits out.

    When you sit there thinking up or refining prompts you’re basically outsourcing the imaginative visualizing part of your brain. […] So AI generation is at least some portion of the artistic or creative process but not all of it.

    We use a lot of devices in our daily lives, whether for creative purposes or practical. Every such device is an extension of ourselves; some supplement our intellectual shortcomings, others physical. That doesn’t make the devices capable of doing any of the things we do. We just don’t attribute actions or agency to our tools the way we do to living things. Current AI possesses no more agency than a keyboard does, and since we don’t consider our keyboards to be capable of authoring an essay, I don’t think one can reasonably say that current AI is, either.

    A keyboard doesn’t understand the content of our essay, it’s just there to translate physical action into digital signals representing keypresses; likewise, an LLM doesn’t understand the content of our essay, it’s just translating a small body of text into a statistically related (often larger) body of text. An LLM can’t create a story any more than our keyboard can create characters on a screen.

    Only once/if ever we observe AI behaviour indicative of agency can we start to use words like “creative” in describing its behaviour. For now (and I suspect for quite some time into the future), all we have is sophisticated statistical random content generators.

    EnderMB,

    Another programmer here. The bottleneck in most jobs isn’t in getting boilerplate out, which is where AI excels; it’s in that first and/or last 10-20%, alongside deciding what patterns are suitable for your problem, what proprietary tooling you’ll need to use, what APIs you’re hitting, and what has changed in recent weeks/months.

    What AI is achieving is impressive, but as someone that works in AI, I think that we’re seeing a two-fold problem: we’re seeing a limit of what these models can accomplish with their training data, and we’re seeing employers hedge their bets on weaker output with AI over specialist workers.

    The former is a great problem to have, because this tooling could be adjusted to make workers’ lives far easier/faster, in the same way that many tools have done already. The latter is a huge problem: in many skilled-worker industries we’ve seen waves of layoffs, and years of enshittification resulting in poorer products.

    The latter is also where I think we’ll see a huge change in culture. IMO, we’ll see existing companies bet it all and die from backing AI over people, and a new wave of companies focus on putting out work of a certain standard to take on the larger players.

    archomrade,

    This is a really balanced take, thank you

    sleepmode,

    After seeing the horrific stuff my demented friends have made DALL-E barf out, I’m excited and afraid at the same time.

    Carighan,
    @Carighan@lemmy.world avatar

    The example videos are both impressive (insofar as they exist) and dreadful. Two-legged horses everywhere, lots of random half-human-half-horse hybrids, walls changing materials constantly, etc.

    It really feels like all this does is generate 60 DALL-E images per second and little else.

    fidodo,

    It will get better, but in the meantime you just manually tell the AI to try again or adjust your prompt. I don’t get the negativity about it not being perfect right off the bat. When the magic wand tool originally came out, it had tons of jagged edges. That didn’t make it useless; it just meant it did a good chunk of the work for you and you needed to manually get it the rest of the way there. With Stable Diffusion, if I get a bad hand I just inpaint and regenerate until it’s fixed. If you don’t get the composition you want, just generate parts of the scene, combine them in an image editor, then have it use that as a base image to generate on top of.

    They’re showing you the raw output to show off the capabilities of the base model. In practice you would review the output and manually fix anything that’s broken. Sure, you’ll get people too lazy to even do that, but non-lazy people will be able to do really impressive things with this even in its current state.

    Theharpyeagle,

    I mean, it took a couple months for AI to mostly figure out that hand situation. Video is, I’d assume, a different beast, but I can’t imagine it won’t improve almost as fast.

    Natanael,

    This would work very well with a text adventure game, though. A lot of them are already set in fantasy worlds with cosmic horrors everywhere, so this would fit well to animate what’s happening in the game

    archomrade,

    For the limitations visual AI tends to have, this is still better than anything I’ve seen. Objects and subjects seem pretty stable from frame to frame, even if those objects are quite nightmarish.

    I think “Will Smith eating spaghetti” was only like a year ago.

    catherine,

    OpenAI introduces Sora, its text-to-video AI model. Sora is designed to generate video content from textual descriptions, offering a groundbreaking tool for content creation and storytelling. This innovative model showcases the potential of AI in transforming multimedia production and creative industries.

    jownz,

    The folks with access to this must be looking at some absolutely fantastic porn right now!

    webghost0101,

    Oh its going to be fantastic all right.

    Fantastical chimera monster porn, at least for the beginning.

    helpImTrappedOnline,

    Honestly, let’s make it mainstream. Get it to a point where it’s more profitable to mass-produce AI porn than to exploit young women from god knows where.

    myxi,
    @myxi@feddit.nl avatar

    I don’t think they would make a model like this uncensored.

    dylanTheDeveloper,
    @dylanTheDeveloper@lemmy.world avatar

    ‘obama giving birth’, ‘adam sandler with big feet’, ‘five nights at freddy’s but everyone’s horny’

    possibilities are endless

    UndercoverUlrikHD,

    Looking forward to the day I can just copy paste the Silmarillion into a program and have it spit out a 20 hour long movie.

    platypus_plumba,

    I was thinking exactly this but with the Bible. Not because I like the Bible but because I’d love to see how AI interprets one of the most important books in human history.

    But yeah, the Silmarillion is basically a Bible from another universe.

    msage,

    Which is why christians are scared of them. It will open people’s eyes to how anyone can write a fairytale. And so much better ones, too.
