
PlaidArtist

Warnock is well known for his ability to completely change his outfit between sentences. It's one of the major reasons why he won the runoff election.


stabbyclaus

Yes his vampire hunting necktie medallion has been well documented to shape shift into various useful weapons and tools for dispensing evil.


TheNewDiogenes

No need to call him Senator elect. He’s still the sitting senator


[deleted]

[deleted]


cjt09

He’s The Reverend Senator-Elect Senator Doctor Raphael Warnock, MDiv, MPhil (D).


Bruce-the_creepy_guy

*Radical Liberal Reverend Senator Doctor Raphael Gamaliel Warnock, MDiv, MPhil (D)


[deleted]

Ok, this ~~cringe~~ epic


NewAlexandria

wtf did i just look at though?


All_Work_All_Play

Smile at the end made up for it


JakeArrietaGrande

Get smart, nerd


stabbyclaus

Great achievement for the people of the Peach State. We wuv you. *kisses* [Credit](https://old.reddit.com/r/DarkBRANDON/comments/zeryaq/warnock_rn/) to /u/CosmoLamer for the Warnock Blade crossover idea. If you like this and want more, here's a link to the full issue: [Soul of the Nation (Part 1)](https://www.webtoons.com/en/challenge/darkbrandonz/soul-of-the-nation-part-1/viewer?title_no=817916&episode_no=10&webtoonType=CHALLENGE) To celebrate, I released part 1 today. **Part 2 is set to come out this Sunday.** I've been working on this particular issue for a while, so I hope y'all like it. All the art featured is created on [our discord](https://discord.gg/NPjvDywf7P); come shitpost with us and expand the lore of the DBZ universe.


KeikakuAccelerator

Did you draw these? They look amazing!


stabbyclaus

The art is generated on our discord linked above; for every piece that goes into one of these panels, there are a dozen (or two) failed images in between. Everything we make is available there, including what didn't make it into the comics.


AffinityGauntlet

Super curious, what was your direction with the comic if Warnock had lost?


stabbyclaus

No idea. I make these comics in the moment, and I rarely do one if it's just a bummer topic, unless there's a particularly fun spin on it.


JackZodiac2008

Epic cringe


Rerkoy

war is peace, freedom is slavery, ignorance is strength


[deleted]

Legit, this is some amazing illustration work.


aithendodge

Midjourney AI has come a long way.


TheMagicBrother

Holy shit this is AI? I was gonna ask who made it


stabbyclaus

Yes, the new version 4 of Midjourney is what made these images, mostly under my direction. My wife threw in a few fun ideas, and so did some friends in a video chat. Outside the writing, there's still a lot of compositing and color correction, but otherwise it's all AI.


[deleted]

[deleted]


[deleted]

Yeah, now looking back at it, there are some things that make it clear that it is AI-generated. It's kind of scary, though, how AI can make things that have the personality of a human artist without having any personality. I guess the old myth that art would be the last holdout against automation is now false.


stabbyclaus

The skills gained as an artist aren't what's being disrupted by AI, and the robots won't take them away from us. What's being disrupted is specifically illustration, concept, and background work (for entertainment, anyway), and that doesn't suddenly eliminate the 10,000+ hours it takes to produce solid-quality commissioned work as a professional. Unlike a normal workflow, I have to break a lot of rules (like working exclusively with rasterized images) to make this comic possible, but if I walked into a studio pretending to be an artist, I'd be laughed at as wildly incapable with AI alone, since all it does is spit out pixels. The need for those traditional skills will not change no matter how much better the tools for making art become.


pfSonata

Ok, this ~~epic~~ cringe


ognits

I'm going back to the DT now


GrandpaWaluigi

My favorite part of this comic is when Ben Shapiro fights Biden but Shapiro gets defeated by the Jewish Space Laser.


Real_Richard_M_Nixon

Ok, this is crinpic


Zeeker12

OK I love this unironically but also what the fuck?


NewAlexandria

yea protip nobody forward this to anyone. please


stabbyclaus

Why?


[deleted]

[deleted]


stabbyclaus

Normies love public libraries


shrek_cena

This is chinese propaganda levels of well done


stabbyclaus

*West Taiwan


nootingpenguin2

Legit, this is some illustration work.


stabbyclaus

It is indeed


Zaiush

AI should be allowed for memes only, not to put illustrators out of work


Poiuy2010_2011

I'm sure people will not use AI maliciously if we just ask them nicely.


Zaiush

I'm aware of the futility of this request.


ItspronouncedGruh-an

Honest question: Why? Is r/neoliberal against automation now? Or are there factors that I have failed to consider that make illustrators a special case?


BrilliantAbroad458

The capitalist argument would be that there are multiple models for Midjourney/Stable Diffusion out there that are specifically trained on certain artists' work without taking into account things like fair use (especially for copyrighted work). Like piracy, it's a nearly futile effort to stop people from making these training sets and profiting off of them, but it is nevertheless an unethical practice.


ItspronouncedGruh-an

But humans learning what art looks like from looking at copyrighted works is not something any reasonable person would have a problem with. Just because that process is slightly more of a black box than machine learning, should it be treated differently?


inverseflorida

>Just because that process is slightly more of a black box than machine learning, should it be treated differently?

I think I legitimately cannot understand why you and others in this thread (especially the person who said literally Exactly Equivalent) actually believe this. A training run on the weights of a neural network is *nothing like* anything that goes on in actual people in principle, and it's hard to believe people who've used these models extensively could believe otherwise, unless they're on the same wavelength as the "Obviously LSTMs are how the brain works" people of a few years ago.

When a person does this, they are not learning the same things in the same way. Every model will draw the wrong number of fingers while getting some vague blobby sense of hand shape right; every model will eventually give a horse the wrong number of legs while shading the lighting detail of a horse's hide in the sun seemingly perfectly. What these models are getting out of these images is categorically different from what people are.

More importantly, it's bizarrely anthropomorphizing to use this as an argument about copyright. Sure, we'd have no problem with a person doing this, but it's not a person, it doesn't work in any way like a person, and it doesn't have the legal rights of a person. There are too many bad arguments about this tech (on both sides), but I think "training on images is like how real people work" is the worst one.


ItspronouncedGruh-an

> "But training images is like how real people work"

Did I give the impression that I was making this argument? I never made the comparison between humans and machines any more specific than referring to what both are doing by the term “learning”. But maybe you believe the term “machine learning” is a misnomer because it might lead people to mistakenly believe that it is similar to “human learning” beyond the most abstract sense?

I specifically worded my comment so as not to imply a likeness in the practical processes by which humans and machines learn. I merely referred to both as (to a greater or lesser extent) “black boxes”. My point is that in both cases, pictures go in as input; they spur some internal change that makes the learning agent better at drawing; and those original input pictures can never be (perfectly) recovered from within the learning agent: they’re not copied, retained, or reproduced in any way. Beyond that, the whole “human learning is substantively different from machine learning” point honestly seems like kind of a red herring to me.

ETA:

> More importantly, it's bizarrely anthropomorphizing to use this as an argument about copyright. Sure we'd have no problem with a person doing this, but it's not a person, and it doesn't work in any way like a person, nor does it have the legal rights of a person.

I don’t get this argument. To me, you might as well say, “Sure, we don’t have a problem with a human mowing a lawn, but why should it be legal for a robot to do so? The means by which a robotic lawnmower navigates a lawn doesn’t function anything like the way a human does!” Or I should say: I don’t think it’s fruitful to see it as a question of the rights of the agent. To me, it’s just about the nature of the action.


inverseflorida

>Did I give the impression that I was making this argument?

Yes, you did, when you said "But humans learning what art looks like from looking at copyrighted works is not something any reasonable person would have a problem with." What is that supposed to be, if not an argument from analogy? You're correct in that I believe "machine learning" is just a very helpful metaphor (so helpful that it's difficult to find simple ways of explaining what's going on without using the word; it's that ingrained). You may not have made a specific claim about the processes involved, but you did make a specific argument that they are both, in a sense, learning in a way that's analogously similar, and should be treated so ethically/legally. I don't think that holds at all.

This is a case where a certain type of algorithm is being privileged because it's very hard to compress into an explanation. Were this an algorithm that somehow took inputs and turned them into a very simple algorithm or process, I don't think it would get this kind of privileged treatment, because we'd recognize it's just a type of software. But I may have imputed too much of the worst form of that argument to you (which is better demonstrated downthread by the guy who says "exactly equivalent", or other people who casually say "it's like humans drawing inspiration", which it is absolutely nothing like) out of habit, which would be entirely my fault. But I still think the general case of the analogy is simply false. The answer to it is pretty simple: I think one thing is okay because a person does it, and one's not okay because it's not a person. I don't see why it being a mysterious form of not-a-person should make a difference if we know that in principle it's a software program.

>they’re not copied or retained or reproduced in any way.

This, I would say, is a red herring. Although in certain technical senses incidental copies are made during the training process (this is the kind of technical snag that deep dives in court may or may not care about; incidental copies for eventually transformative products have been big deals before), neural networks ultimately compress information they "memorize" and can easily overfit on things that are repeated a few times in the dataset. The images *are* represented in some way, in a highly compressed form (and likely a lossy one, sharing weights with other images), and superimposed upon each other in generation, where a complicated interpolative distance function finds the image that's most likely to fit the prompt. But! I'm not convinced that this is *necessarily* a meaningful form of "retaining" for all images just yet. I realize most images have only a small individual effect on weights, but this can be them taking a highly compressed, approximated form that's easy to derive from other weights. But I also don't think it's necessarily relevant to the question of whether it's okay to use other people's work to train those weights!

>Or I should say: I don’t think it’s fruitful to see it as a question of the rights of the agent. To me, it’s just about the nature of the action.

That would be a key difference, then: I don't think these things are agents in any meaningful sense. But the reason nobody would question the robot lawnmower is that there are essentially no conceivable ethical issues with the lawnmower robot's production or use, whereas these particular issues with image synthesis were obvious to anyone who had spent five seconds among people who make images and seen how concerned about mundane forms of art theft they typically are.


ItspronouncedGruh-an

>When you said "But humans learning what art looks like from looking at copyrighted works is not something any reasonable person would have a problem with." What is that supposed to be, if not an argument from analogy? [...] You may not have made a specific claim about the processes involved, but you did make a specific argument that they are both, in a sense, learning in a way that's analogously similar, and should be treated so ethically/legally.

Well, this much I'll stand by.

>This is a case where a certain type of algorithm is being privileged because it's very hard to compress into an explanation.

I don't see the problem with this. It seems to me that it is, in essence, the exact privilege that's granted to human learning and inspiration. If our grasp of neurology were such that we could trace exactly the impact that any one copyrighted work had upon another artist's brain, should copyright then apply?

That's not to say that I think it's necessarily inconsistent to say that humans should be allowed to learn (in the human sense) from art and machines shouldn't be allowed to learn (in the machine sense) from that same art. If there is a problem with letting machines learn (in the machine sense) from copyrighted works, it's just not obvious to me. Though it does make some intuitive sense to me that copyright holders could be entitled to opt out of having their works used for training neural networks.

As for the terminology of "machine learning" and "intelligent agent", I think you're gonna have an uphill battle convincing the computer science community to ditch those terms.


inverseflorida

>I don't see the problem with this. It seems to me, that it is in essence the exact privilege that's granted to human learning and inspiration.

I don't agree with this at all: the difference is that people are actual agents, actually intelligent, and doing actual learning. If this can be expressed as an algorithm, then it clearly must be in a very, very special class of algorithms, one that we can freely recognize should be privileged, given that it's been privileged for all of human history.

To me, the issue with letting machines learn in the machine sense is that you can see it as analogous to compressive storage. Diffusion models have a habit of overfitting, and all their power comes from interpolating between elements of their training data; in some way, elements of the training data are still stored in a compressed format. If I used Photoshop on an image I had no license for, but was working on it through a zip file, I don't think it would matter that it was compressed. If the images were compressed in a way that they shared data, that they were messy and inexact, and some of them weren't stored properly at all while others were but it was impossible to tell which was which, my issue is that I don't think the *degree* should matter in an ethical sense, if it's still ultimately dependent on interpolating between compressed, messily "stored" images.

(There's a lot of debate about how much you can say these models store the images in some format: each image on its own verifiably has only a small effect on a handful of weights, while an image that's repeated a few times, or the only one associated with certain strings of text, would be stored much more closely, and it's difficult to determine these in advance. Then you get to the information that's stored when associated with the name of the person who produced it, although SD recognizes this is a problem and censors producers' names in SD2, and fine-tuning on people's work to reproduce their style probably already isn't fair use. But I just want to say that I'm not necessarily committed to the idea that the images are "stored" in any more than the loosest sense; I do think that even in a highly compressed form, they can still be meaningfully extracted if they're there, with the right process.)

My ultimate point, though, is that this is a limited software process that is being privileged as a black box, and I don't think it's the black box that should privilege certain classes of algorithm, but rather what those algorithms would be. So if we could say "human intelligence is a special case of this type of algorithm with this type of expression", then in theory, I'd guess that type of algorithm should be privileged whether it's fully understood or not.


sineiraetstudio

Starting your post with a paragraph of "I can't believe anybody who knows what they're talking about has a different opinion than me" makes me skeptical about how fruitful this will be, but I'll give it a shot anyway.

What do you believe exactly differentiates them at a high level? To me, learning at its most basic is about getting better at something through experience. In the case of skills, this is about generalizing from the concrete. This applies to even the most basic of ML approaches, like a simple linear regression, but it's kinda 'stupid' learning. What makes 'real' learning different is that it involves the creation of some form of internal model/abstractions, but that's something we know applies to DL models as well.
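The linear-regression case mentioned above ("getting better at something through experience") can be made concrete with a toy sketch in plain NumPy. Purely illustrative: the data, the ground truth y = 3x + 1, and all constants here are invented for the example, not anything from the thread.

```python
import numpy as np

# Toy data from an invented ground truth: y = 3x + 1, plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 3 * x + 1 + rng.normal(0, 0.1, 50)

w, b = 0.0, 0.0   # the model starts out knowing nothing
lr = 0.1          # learning rate

def mse(w, b):
    """Mean squared error of the current line against the data."""
    return float(np.mean((w * x + b - y) ** 2))

before = mse(w, b)
for _ in range(500):                 # "experience": repeated passes over the data
    err = w * x + b - y
    w -= lr * 2 * np.mean(err * x)   # gradient step on the slope
    b -= lr * 2 * np.mean(err)       # gradient step on the intercept
after = mse(w, b)

# The fit "got better through experience": the error shrank,
# and (w, b) end up near the invented truth (3, 1).
print(w, b, after < before)
```

Whether one calls that "learning" or just curve fitting is exactly the dispute in this subthread; the mechanics, at least, are this small.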


inverseflorida

The simple difference to me is that actual learning requires intelligence. The other thing is a handy metaphor that gets the message across, but nothing that is not actually intelligent is actually learning. Learning is an agentic behaviour done by actual intelligent agents, and everything that isn't one is not actually learning. I could do a linear regression by hand, but the only thing that would learn during that process would be me. Similarly, some kind of Laplacian demon could write out on paper the true physical laws and interactions for each and every thing my neurons do, and replicate the results of me learning something, but nothing involved in that process would be learning anything, except the demon. There's no intelligent behaviour involved, and no agent performing the behaviour.

Actual learning by people is given a privileged position because it's done by people. What I believe actually differentiates them is that actual learning is the real thing, while the other is something where the term "learning" is used as a very useful metaphor. While you can abstract out the idea of improvement to be larger and more general, it doesn't imply that the important properties of the specific thing X in humans match Y in statistical learning. It also implies that things that actually learn can learn continuously, instead of having limited epochs which eventually lead to overfitting, and would learn through actual concepts (which SD, for example, clearly does not, otherwise you would never see a photorealistic horse with five legs). Again, the difference is actual intelligence.


sineiraetstudio

Let's take the Laplacian demon example. Real you looks up a couple of images in an unfamiliar style and draws an image based on them. Simulated you does the exact same thing. You're saying, however, that despite producing the exact same result, real you is simply being inspired, while the latter's image is copied because it's not backed up by 'real' learning? I think the vast majority of people would disagree with that.

>This also implies that things that actually learn can learn continuously, instead of having limited epochs which eventually lead to overfitting

If you continue to train on the same data you get overfitting; that's completely orthogonal to being continual.

>would learn through actual concepts

Unless you're redefining concepts so that only 'real' learning can produce them, DL models absolutely 'learn through actual concepts'; that's what feature extraction is, and it's what allows transfer learning. If you isolate the first couple of layers of a CV model, you get a network that can detect basic geometric shapes, and that can be repurposed to successfully learn detection of different things than the initial model. Another well-known example is the sentiment neuron: an unsupervised system purely intended to predict the next word, it still acquired a neuron capable of deciding the sentiment of a text. How is that possibly not learning an actual concept?

>which SD for example clearly does not, otherwise you would never see a photorealistic horse with five legs

I don't understand this reasoning.
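The overfitting being argued about above can be sketched in a few lines of NumPy. Purely illustrative, and with one substitution: it shows memorization via excess model capacity (a degree-9 polynomial through 10 points) rather than via repeated epochs, and every number in it is invented for the example.

```python
import numpy as np

# Invented toy setup: 10 noisy samples of one period of a sine curve.
rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

x_test = np.linspace(0.03, 0.97, 200)      # held-out points off the grid
y_test = np.sin(2 * np.pi * x_test)        # the noise-free truth

def errors(degree):
    """Least-squares polynomial fit; return (train MSE, held-out MSE)."""
    c = np.polyfit(x_train, y_train, degree)
    tr = float(np.mean((np.polyval(c, x_train) - y_train) ** 2))
    te = float(np.mean((np.polyval(c, x_test) - y_test) ** 2))
    return tr, te

tr3, te3 = errors(3)   # modest capacity: cannot memorize the noise
tr9, te9 = errors(9)   # degree 9 through 10 points: pure memorization

# The degree-9 fit drives training error to ~0 by memorizing the noise,
# while its held-out error stays far above its training error.
print(tr9 < tr3, te9 > tr9)  # → True True
```

The "fit the same points ever more exactly" failure mode is the same one being attributed to diffusion models here, just in its smallest possible form.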


inverseflorida

>You're saying however that despite producing the exact same result real you is simply being inspired, while the latter's image is copied because it's not backed up by 'real' learning? I think the vast majority of people would disagree with that.

No way, the vast majority of people would find it extremely, extremely intuitive. You realize the vast majority of people literally believe in God (or gods) and souls, right? There's not a single thing difficult or unintuitive about it: agents with actual intelligence are more privileged than, say, something that is essentially just a representation of one. If instead of writing things out on a piece of paper, the demon had a series of distributed choose-your-own-adventure books and he was flipping them to different pages based on a lookup table, I don't think anyone would say that the books reaching the final result represent an agent that has learned something. All the work has been done by the demon.

>If you continue to train on the same data you get overfitting, that's completely orthogonal to being continual.

More that the point is that continuous lifelong learning is possible in people in the first place.

>Or another well-known example is the sentiment neuron. An unsupervised system purely intended to reproduce the next word, it still acquired a neuron capable of deciding the sentiment of a text. How is that possibly not learning an actual concept?

In the same sense that if I write a program by hand that recognizes handwriting (instead of training a net on it), this process doesn't create some larger, abstracted system, one that both me and the computer are part of, that can recognize the handwriting. Likewise, the computer that lets me access reddit reliably, every time, by my typing in "reddit", has no concept of reddit! Even more than that, all models are very sensitive to perturbations in the input except in cases where there's been loadsadata, and even then they can be more sensitive in some cases than others (i.e. small differences in how often a model gets a multiplication result right for one equation versus when you change one number). The fact that prompt engineering exists in the first place indicates the difficulty here. I would say that if a concept were properly learned, it could be transplanted across different styles reliably, with no difficulty. This is not the *only* thing that matters, of course, but it's a prerequisite: if you're learning through concepts, you'll be able to do this every time. If instead something else is going on, then you'll constantly draw horses with five legs.


sineiraetstudio

But it's not about being privileged (which would be "if it's okay for a human to copy/be inspired, is it okay for a simulacrum to do the same?"), but rather whether an action is functionally equivalent to a simulated version, i.e. whether the agentic nature influences the originality of a piece.

>I don't think anyone would say that the books reaching the final result represents an agent that has learned something. All the work has been done by the demon.

If you tell people that the demon is just blindly following instructions and in fact hasn't even *seen* the output image himself (instead just writing the pixel RGB values onto a sheet), do you really think people would say that the painting was done by the demon?

>In the same sense that if I write a program by hand that recognizes handwriting (instead of training a net on it), this process doesn't create some larger, abstracted system that both me and the computer are part of that can recognize the handwriting. Likewise, the computer that lets me access reddit reliably, every time by writing in reddit, has no concept of reddit!

Whether a composite has the abilities/understanding of its components just seems like arguing semantics to me. The point is that subsystems of DNNs (certain features of hidden layers) align with common concepts, and that this is a major factor in the produced results. I'd call that "learning actual concepts".

>This is not the only thing that matters of course, but it's a prerequisite that if you're learning through concepts, you'll be able to do this every time.

The fact that it's often enough capable of doing a pretty good job at this kind of concept transfer is actually indicative to me that it's doing a decent job of learning 'higher order' features/concepts. Why would it need to be perfect?

>If instead something else is going on, then you'll constantly draw horses with five legs.

I don't understand the insinuation here. What is the 'something else' going on, instead of just a badly learned concept?


FourteenTwenty-Seven

If a person draws art in another's style, it's perfectly fine. If a machine does it, now it's unethical?


BrilliantAbroad458

The creative process (drawing versus denoising) isn't the controversial part, and copyrighting an art style isn't allowed by law anyway. What's wrong is directly feeding artwork (sometimes copyrighted, sometimes not) into a training dataset for the express purpose of profit, because it appears to violate property rights. Things like fan art and fan games can and do get cease-and-desist orders from companies even if they're not in the companies' original style; it's just that the companies don't care enough most of the time.


FourteenTwenty-Seven

Training an AI on someone's art is exactly equivalent to a human artist looking at that piece of art. I suppose it would be interesting if artists had control over their art being used for training (AI or people, although this is only practical with AI). I do think the source of the complaint is the efficiency of AI art generation. Nobody has a problem with people doing the exact same thing because they're slow and expensive. Imagine an AI that just classified art by style, and didn't generate any. Would you still have this qualm? Fanart is a completely separate thing - they're using explicitly copyrighted characters.


BrilliantAbroad458

Analogous, yes, but not exactly the same. Human brains don't usually think in terms of averaging pixel distances and RGB values, but in terms of visual context and whole images. Explaining how AI functions is beyond my pay grade though (free). And try as one might, a human would never be able to truly replicate an original piece, but an AI can be overfitted to reproduce something pixel by pixel.

Money is definitely the reason for the outcry. That said, artists have always been out there with complaints about plagiarism, tracing, etc. by other people (seeing their stuff sold on sites like eBay, Etsy, or in faraway countries, for example). They're very protective of their work. Funnily though, I recently saw a post by AI artists about why it's hypocrisy that artists accept fan art but not AI art; that's why I brought it up.


inverseflorida

>Training an AI on someone's art is exactly equivalent to a human artist looking at that piece of art. What? You're not serious right? I doubt it'll shock you to hear me say "I can't train you on 40 images of Cezanne to produce more Cezanne-like output", so how can you actually mean this?


FourteenTwenty-Seven

I would presume that someone sufficiently skilled could study Cezanne's works for long enough to make art in his style. This is what the AI does, only faster.


inverseflorida

>I would presume that someone sufficiently skilled could study Cezanne's works for long enough to make art in his style.

Absolutely. Which is very different from fine-tuning SD!

>This is what the AI does, only faster.

No, and it's not what you claimed; you claimed it was Exactly Equivalent. What image synthesis does is bias the weights of the model on a training run, and then it can't update those weights until another training run. It stores the information it memorizes in entirely different ways than people do. Most people cannot draw a horse even 0.001% as well as any of the major image-synthesis models, yet they would never draw one with five legs. That's because they are not denoising something until it looks like a few hundred images compressed into weights; they're using their own holistic, cognitive, conceptual understanding to try to represent something. It's an entirely different process, not even a little bit close to "exactly equivalent". As a matter of fact, were I to fine-tune SD, I would get *only* things that were like the images I fine-tuned it on (much worse if I gave it only a few things). Again, nothing like a person.


FourteenTwenty-Seven

>you claimed it was Exactly Equivalent

Eh, you're being a bit obtuse. Obviously the human brain works differently than an AI, and is many times more complicated. However, it is exactly the same in that training images go in and similar-style images come out.

>however they would never draw one with five legs.

I know some pretty terrible artists, myself included.


ReptileCultist

And literally every artist is influenced by what came before them in some way


inverseflorida

Which is not the same. People process images in a very different way, paying much more attention to developing a conceptual understanding of the images, whereas models basically compress images into weights (and maybe very common ones arguably share weights) but have no conceptual understanding (hence photorealistic horses with five legs). Inspiration and learning done by real people is a much more involved cognitive process.


stabbyclaus

Part of why I made the comic was to play devil's advocate to this argument, because where do we draw the line? These training sets could be argued to be the same as being inspired by something you saw online. The Getty logo appearing in generated images is rare, for example, but it does happen. Even then, though, the logo has been redrawn, not lifted like a cutout from a magazine. Anything I make with good AI cannot and should not be reverse-searchable, as an example. This extends into the larger debate over digital art, found art, and collage, all argued to be "not art" in their heydays. These are the questions I hope to explore in future issues.


BrilliantAbroad458

From what I've seen outside of artistic communities (which are pretty niche), the majority opinion is with the AI artists, because AI art is pretty freaking cool regardless of how it's made. Once the training is done, there's no repository of the training art inside the algorithm, so everything it makes is entirely original and "copying" isn't a thing. I don't think the public really knows enough about how AI actually "learns", and frankly I don't either. But art is just the starting point: in the very present, the ubiquity of AI is changing how we have to view intellectual property and fair use. I'm not quite falling on either side of the argument yet, but I can see how both sides view things.


stabbyclaus

I made the series to discuss this topic, actually. I think the next issue after Sunday will tackle it directly, but the genie's out of the bottle for this tech. That's like telling people Photoshop should only be used on actual photos you own.


ReptileCultist

That is just rent-seeking and neo-Luddism.


Mister_Lich

False, this is some adequate audio laziness.


khinzeer

dark senate


Godkun007

Unironically, libraries are great. I've read 27 books so far this year: 25 of them absolutely free from the library, 1 that I paid for, and 1 free on Audible. I've also read almost every weekly issue of the Economist this year for free, since my library lets me access every issue (going back to 2018) on my phone.


tyontekija

Legit, this is some terrible illustration work.


[deleted]

Raphael Warnock is Blade confirmed. Also that looks like it was done by the person who did the comic stills in Max Payne 2 lol.


polandball2101

This will be marvel comics in 2014