While I am quite excited about the Walton Goggins-infused Amazon Fallout series, the show debuted some promo art for the project ahead of official stills or footage and…it appears to be AI generated.
My guess is that AI’s first big victim in graphic design will be stock art. Previously, crap like that background asset would just have been stock purchased from Getty or Adobe Stock. Now it can be generated.
I’m already starting to use it instead of paying for bullshit licenses.
I’ve been using AI for school and work, as God intended: give it the raw material, have it do the grunt organization work, and then proofread and correct anything it gets wrong.
There is very little to say that hasn’t been said. For an example of our limitations as humans, there’s only 50ish unique plot lines in the English language. To expect each person to be completely original is asinine.
It’s a tool, one of many in my toolbox. People who are just flat against any and all AI or LLMs are behind the curve.
For an example of our limitations as humans, there’s only 50ish unique plot lines in the English language.
How would the number of unique plotlines be determined by the language they’re told in? Why would the number of plotlines be based on human cognitive capabilities? None of this makes sense.
Either way, “unique plotline” doesn’t mean anything, from the perspective of literary or narrative studies. There’s no universal, objective way to dissect narratives, and they cannot be boiled down to a distinct number of basic models. There have been attempts to get to the most fundamental narrative model (Greimas, Campbell), but they’re far from widely accepted.
People who are just flat against any and all AI or LLMs are behind the curve.
Art is, by itself, not something that has “the curve”. If you’re doing something with very practical goals and need hyperproduction, sure, but art is not necessarily made or consumed with such a logic.
Pretty much.
People very frequently complain about AI taking the jobs of artists. But if the money was never actually going to be put on the table for artists to claim in the first place, I really don’t think keeping AI out would have helped them much.
That doesn’t mean I hate artists or what they do, absolutely not. It’s just that artists are people, and people are limited in how much they can do at any one time.
For the past couple of months I’ve been waiting on multiple artists to finish up their commission queues, and I’m worried I’ll have to turn one of them away because a variety of life changes have led to me losing my job and having reduced income.
As of right now, the cost of generating a picture with a tool like Stable Diffusion or DALL-E is pretty low, and the former is even free if you have the right hardware. These systems are almost always available and can produce results in a matter of seconds.
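For anyone curious what “free if you have the right hardware” looks like in practice, here’s a rough sketch using Hugging Face’s diffusers library — the checkpoint name and prompt are just examples, and it assumes you have a CUDA GPU with enough VRAM:

```python
# Illustrative sketch only: local image generation with the diffusers library.
# The checkpoint and prompt below are examples, not anything official.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.x checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A single 512x512 image typically takes only a few seconds on a modern GPU.
image = pipe("retro-futuristic wasteland diner, concept art").images[0]
image.save("background.png")
```

Once the model weights are downloaded, there’s no per-image fee at all, which is exactly why I think stock art is the first thing on the chopping block.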
Of course, that doesn’t change the fact that these tools are only good at painting the bigger picture. They have a tendency to choke on the smaller details. And I would personally rather wait for an actual person to be available to work on something original that’s also capable of filling a niche that AI models have yet to be trained on.
This entirely disregards the fact that these models were trained on human artists’ work without consent or remuneration. As it is, it isn’t “AI”, it’s just a glorified plagiarism machine. Not to say it isn’t impressive, but it has already stolen work done by artists and is further stealing upcoming work by mashing together older works.
There are ways to do it ethically by training on artwork with permission, kind of like how Adobe is doing it, but that isn’t going to have as wide a reach as the free ones.
but it has already stolen work done by artists and is further stealing upcoming work by mashing together older works.
You keep using that word “stolen”, I do not think it means what you think it means.
Also, AIs do not “mash together” works from their training sets. This is a very common and very incorrect conception of how they work. They are not collage generators or copy-and-paste machines. They learn concepts from the images they train on, they don’t actually remember fragments of those images to later regurgitate in some sort of patched-together Frankenstein’s Monster.
You’re correct, but it’s still too early and most people haven’t spent enough time with AI to fully understand. Maybe they never will.
Like the classic quote says, it is difficult to get a man to understand something when his salary depends upon his not understanding it.
I just asked Wombo Dream to make the Mona Lisa and it did. Sure, you can tell it’s not exactly the real thing, but I don’t know how you can say it didn’t copy any of the actual Mona Lisa original.
I considered including mention of overfitting in my earlier comment, but since it’s such an edge case I felt it would just be an irrelevant digression.
When a particular image has a great many duplicates in the training set - hundreds or even thousands of copies are necessary - then you get the phenomenon of overfitting. In that case you do get this sort of “memorization” of a particular image, because during training you are hitting the neural net over and over with the exact same inputs and really drilling it into them. This is universally considered undesirable, because there’s no point to it - why spend thousands of dollars to do something that a copy/paste command could do so much better and more easily? So when image generators are trained the training data goes through a “de-duplication” step intended to try to prevent this sort of thing from happening. Images like the Mona Lisa are so incredibly common that they still slip through the cracks, though.
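Just to illustrate the idea of de-duplication, here’s a toy sketch using perceptual hashing via the imagehash library — this is not the actual method used for any particular model’s training set, just a way to picture how near-identical copies get filtered out before training:

```python
# Toy illustration of training-set de-duplication via perceptual hashing.
# Near-identical images produce similar hashes, so duplicates can be dropped.
from PIL import Image
import imagehash

def deduplicate(paths, max_distance=4):
    kept_paths = []
    kept_hashes = []
    for path in paths:
        h = imagehash.phash(Image.open(path))
        # Skip this image if it's within `max_distance` bits of one we already kept.
        if any(h - prior <= max_distance for prior in kept_hashes):
            continue
        kept_hashes.append(h)
        kept_paths.append(path)
    return kept_paths
```

The point is just that duplicates are something trainers actively try to remove, because drilling the same image in thousands of times is wasted compute at best and memorization at worst.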
There’s a paper from some months back that commonly comes up when people want to go “aha, generative AI copies its training data!” But in reality this paper shows just how difficult it is to arrange for overfitting to happen. The researchers used an older version of Stable Diffusion whose training set was not well curated and is no longer used due to its poor quality, and even then it took them hundreds of millions of attempts to find just a handful of images from the training set that they could dredge back out of it in recognizable form.
People have also copied art for as long as art has existed. You can buy a copy of the Mona Lisa in the gift shop, or print your own. That’s why the market for art has always been hyperfocused on ‘originals’. But rarely are the artists the ones getting rich off their art, especially now. I hate capitalism as much as anyone, but if your motivation for making art is money, you’re in the wrong business and your art probably isn’t that good anyway.
Graphic designers aren’t the first. Automation has been ending jobs for decades. AI is just a form of automation.
The wheel is a form of automation
Yeah but the wheel was discovered before rent
They said AI, not automation.
The comment you are replying to linked AI and automation. It makes total sense in this context.
They will be generating it themselves soon enough. I contributed some stock photos in the past. They recently sent me info about their new contribution pipeline, for content that may not pass the usual quality threshold, but will help train the models. If they do it right, who knows, maybe they can get better results worth paying for.
The fun part here, though, is that they don’t have copyright on that art. If any of the “stock AI footage” becomes iconic, it’s public domain.
Dicey spot for a studio to be in, but it does save some bucks, so they are plowing ahead.
You should consult with a lawyer first. The amount of misinformation circulating on the Internet about how AI art is all public domain is enormous. That recent court case (Thaler v. Perlmutter) that made the rounds just recently, for example, does not say what most people seemed to be eagerly assuming it said.
I’m also someone who has been misinformed about the copyright status of AI art. Could you explain how it actually works, or link to a resource that does? I tried searching around for a bit but couldn’t find a clear consensus on it.
It will be really interesting to see how the case law develops. Personally, I am more interested in things on the IP side. A lot of lawyers I work with currently view LLMs like a shredder in front of a leaf blower. Which, it kind of is.
Neither did they have copyright on the stock art they used to purchase. The complete piece, however, including the Pip-Boy, is not AI generated. Someone put this together and put effort into it, which easily qualifies it for copyright protection, even if the background is AI generated instead of purchased stock art.
If you’re talking about that recent legal case, look again. The artist made the claim that the AI was the sole author, but that he should own the IP. I think the vast majority of people would claim that, in its current state, the AI is a digital tool an author uses to make art. The recent ruling just reconfirmed that (a) machines aren’t people, and (b) you can’t just own another author’s work.
I don’t even mind the use of AI art in this context, but the fact that they couldn’t be bothered to do a little touch-up speaks a lot to the quality that can be expected from their show.
Their absolute mangling of the Wheel of Time tells me exactly what to expect from this show.
I didn’t even get past 30 min. After seeing what they did to Mat’s family and character I was out.
They did Mat so dirty the actor playing him peaced out before they finished filming the first season
Perrin is already married? Nope, out.
The LOTR show that they spent a cool billion on is awful as well…
But Egwene could totally be the DR, you guys…
🙄
That means no copyright- woohoo go nuts!
Quoting the U.S. Copyright Office’s own guidance:
In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection.
Don’t go nuts.
YANAL
WANALS
What’s the S?
This is a viral marketing campaign. I hadn’t heard of the show, now I have. It’s a fuck you to artists and a planned rage bait to get people talking about the show.
Yeah, I thought so. There’s no way they’d screw up like that unless it was intentional.
I’m a little skeptical that it’s AI generated because a lot of those details could be the result of kitbashing, which is especially common with concept art (here’s an example from Guild Wars 2). It could be they just grabbed a piece of concept art, slapped some promo stuff on top of it and called it a day. That said, considering how much of a hard-on Hollywood has for AI, I wouldn’t put it past them to generate promo art with an AI.
I wasn’t planning on watching it anyway, but I wanted to throw in my two cents.
That car is such an AI dead giveaway.
That’s clearly not kitbashing, when you have a car completely backwards and don’t even bother to fix it. Why would the perspective be mostly correct yet the car be backwards? You’d have to pull from two sources that a) have the exact same art style, b) have the same perspective, yet c) have one of the cars backward, and finally d) not give a shit enough to correct it.
The red car literally looks like it’s backwards lmao
WoT was utter garbage. LOTR was meh to good depending on who you talked to. They are definitely not on the same level.
Yeah, I enjoyed my watch of LOTR: ROP. I’m a big Tolkien fan too. I read the trilogy and The Hobbit once a year and The Silmarillion once every three or four (it’s dense af). ROP wasn’t an accurate adaptation, but it’s a fun fan fic that I felt was respectful to the original material. Plus it has some awesome visuals and a great soundtrack. There’s definitely things to criticize about it. It’s far from a perfect show, but it’s not bad. I’ll probably do another watch-through when the next season gets closer to being released.
WOT though, it hurt my heart…
I’m sorry, but you need to go back and carefully read the books again because they fucked up way too much in this show for it to even be called a “fun fan fic”. It is just a bad fan fic. There’s countless random moments in this show that don’t get explained, characters that are written with modern lenses, Sauron exhibiting incel behavior, scale feeling tiny compared to the source material despite having a billion dollar budget, characters literally singing about no one getting left behind and then doing just that. There’s a lot to criticize about this show and it never gets better. Sure, the special effects are nice, but that in no way makes up for the lack of substance.
Everything you described is very common in fan fics and why I called it as such. I don’t disagree with your points (most of them), but I do disagree with the severity of the condemnation for those points. I thought it was fine, not stellar, not terrible, it was fine and enjoyable for what it was. A high budget fan fic made by people who respected the lore but wanted to do their own take. Which is what Rankin/Bass and Jackson both did as well. Those are fun too. A huge amount of book fans were livid with Jackson for the changes he made in his adaptation too when the films released. If you think ROP is inaccurate then definitely don’t go back and watch the Rankin/Bass cartoons, you might blow a gasket.
Let people enjoy things.
I’m in the LOTR was good camp. I liked how it took its time. It felt vast and expansive but still coherent. And it was beautiful to look at. I hope the subsequent seasons can keep the quality up despite the changes to production.
When they actually get around to filming due to the strike lol
Thinking about all the money wasted on the LOTR show that could have gone into other programming.
Well this is already taking points off for sure. But let’s see if the show is good. If they stay true to the games and create a truly unique show, maybe it will be worthwhile.
The current Fallout developers aren’t even staying true to the games anymore, so my hopes aren’t high for the series.
Why must you hurt me with the truth? Counterpoint though: Almost heaven
How will they stay true to the games? It’s not The Last of Us where you literally play through a story. Fallout is all about exploring the wasteland at your own pace and shaping the world as you see fit.
Every game has had times when I’ve sat and seriously considered a choice that had massive consequences and mixed both benefits and steep drawbacks. Like in The Pitt, where you have to decide whether to kill a baby’s parents in front of it to liberate the people enslaved there. On a TV show, they’ll have the main character… mull it over and make a decision for the audience? How do you even translate that?
Exactly. They just have to follow the in-game universe and how it operates. Obviously they can’t have us as watchers dictate what happens, but they can transfer that look and feeling to the person on screen. As long as they get the universe of Fallout correct it will do well. All they have to do is make a good story; if they fail at that, then gg.
I don’t personally care so much about strict game-lore following (some amount is needed), but I would want it to feel the same, if that makes sense.
I want to feel the wastes and the personalities and ideologies that have reigned supreme since the bombs fell. I wanna feel that mystery and that goofiness that I’ve come to love from each of these games.
Can’t flop harder than 76 did, and I’ve got a feeling Bethesda is gonna take extra long on the next one, hence this TV show to satiate us for a bit (maybe, who tf knows, it’s Bethesda and Amazon, man).
This is an interesting question actually. In my head, “staying true to the games” initially referred to how the games operate, like the other commenter said, e.g.:
- How different bodily needs are met. To quench my thirst, do I boil the dirty water and just take some RadAway? How much radiation does this InstaMash have? If a character in the show drinks from an irradiated lake and somehow isn’t affected by the next plot device, how “true to the game” is that? If I do that in any of the Fallout games, I’d be running into Deathclaws with only a fraction of my max HP.
- VATS. Will time be stopped or slowed down while the characters are selecting and terminating their targets? There’s a lot that can go well here, especially since it’s an opportunity to inject slow-mo Hollywood-style shooting scenes, but can you imagine if they don’t put any slow-mo in at all? In my opinion that would show a huge lack of understanding of the games.
- To your point, decisions. Unfortunately I think making decisions for the audience is unavoidable here unless the show becomes something like Netflix’s interactive specials. However, some good ideas might include reproducing quests similar to the ones from the games and then making decisions based on data they may have gathered from game quests. Take the Megaton Bomb quest for example. Maybe the show will force a character into deciding between blowing up a city or not at the twilight of a story arc. In the end, they decide to blow it up. Then, during the credit roll, they show that most people in the games who did the Megaton quest actually blew up the city. I don’t actually know what the real stats are, but I think it would be a good idea for the show’s characters – to a certain reasonable extent, because if we blew everything up like in my last playthrough it wouldn’t be a very good show – to follow the patterns of most decisions made by the playerbase in the games. I’d see that as an attempt to reconcile the disconnect between playing a game (lots of control) vs. watching a show (no control).
Without someone pointing it out, you wouldn’t even have noticed that it’s AI generated, since most people don’t look at this kind of art for longer than a second.
I probably wouldn’t have but if there are errors as big as those and they’re trying to slide it by me, that’s pretty slimy.
Sad
Just throwing this out there because we don’t know for sure - but my hope is that Amazon paid their graphic designer/illustrator the regular rate for something like this and they saved themselves 90% of the time it would have taken by using Stable Diffusion and then took the rest of the day off.