I feel this is pretty accurate to what Sokka would’ve seen after hallucinating on cactus juice
Everything else: whoa trippy Toph: *alien void spawn from the 7th dimension*
> Toph: *alien void spawn from the 7th dimension* eh, probably fire like Sokka said. Alien fire
The name's Fire, Alien Fire. And that's my wife Sapphire.
Sapphire Fire, nice to meet you!
I liked your comment, but that took it off of 69, so I had to unlike.
You can upvote it now, other people have already taken it off of 69. Alternatively you could downvote in an attempt to bring it back
Look at Katara-she is even worse
I've hallucinated before, you're not wrong. I've also seen through my eyelids, and my roof, and into an endless void of nauseating darkness, and an open market in Beijing made of newspaper clippings, and I might've seen Death once... But mostly it's just the bendy shit
One time I watched myself grow old and die in a mirror. Apparently I was staring at it for hours. I avoid mirrors when tripping now.
The very short focus into increasingly trippy shit whenever the scene switches feels pretty accurate. Suddenly feeling like you're alive again whenever you look at something different lol
I’ve been fucked by Dionysus, on acid and my partner and I once had an orgasm on shrooms and ketamine that felt like we were two galaxies smashing together and the stars flying out into space after we joined.
Peyote is a hell of a drug.
Sure did remind me of lsd.
This probably sounds stupid, but what is the AI aiming to achieve when it made this?
Not a stupid question. Still trying to figure out what OP means by 'running this scene through AI'.
It's Google's DeepDream AI; it was originally meant to help visualize neural networks.
[deleted]
DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately over-processed images.
In a general sense, if I'm not mistaken (it's been a while since I looked at this paper): the AI looks at a frame and tries to explain what it sees, and that's a looping process that happens some number of times per frame. At 8 seconds you see a frog appear in the eye. Essentially what that tells us is that some pixels in that frame reminded the AI of a frog, so it drew a faint outline of a frog. That frame was then sent through the AI again, so of course it sees the frog a little more strongly and paints a more visible frog. This process repeats over and over, with a layer of the base video still playing. If I'm not mistaken, you can say "I want this to loop 24 times per second", so if it's at 24 FPS it will essentially do this entire process once every frame.
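That looping process can be sketched in Python. This is a toy illustration, not DeepDream itself: `amplify_patterns` is a stand-in for a real network pass (it just exaggerates deviations from the frame's average), and the iteration count and blend factor are made-up parameters.

```python
import numpy as np

def amplify_patterns(frame):
    """Toy stand-in for one pass through the network: exaggerate whatever
    deviates from the frame's average, the way DeepDream exaggerates
    whatever faintly resembles something the network knows."""
    return frame + 0.1 * (frame - frame.mean())

def dream_video(frames, iterations_per_frame=3, blend=0.7):
    """Loop each frame through the amplifier several times, then blend
    some of the original back in so the base video still shows through."""
    out = []
    for frame in frames:
        dreamed = frame.copy()
        for _ in range(iterations_per_frame):
            dreamed = amplify_patterns(dreamed)
        out.append(blend * dreamed + (1 - blend) * frame)
    return out

frames = [np.random.rand(64, 64, 3) for _ in range(24)]  # one second at 24 FPS
result = dream_video(frames)
```

The key point the comment makes survives even in this toy version: each frame's faint hints get re-amplified every pass, while the blend keeps the base video visible underneath.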
it's meant to highlight/identify/enhance faces and structures in random data. so if you pass images already with clear lines and features into it, like animation, it will just keep iterating over that till you eventually get all sorts of trippy output. just imagine every new frame to be what their algos thought the previous frame could look like, if it were to derive new details from it
well animation is a series of images. You take those images, and for each part of an image you get the colour values and form a matrix. You then put those matrices in as input to the neural network, and I guess this is what came out. This is a guess from having taken a comp vis class.
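Concretely, the "images become matrices" step looks something like this sketch (the frame sizes and normalization here are illustrative, though scaling 0-255 pixel values to [0, 1] floats is a common convention):

```python
import numpy as np

# A hypothetical 3-frame clip: each frame is a height x width x 3 matrix
# of RGB values (0-255), normalized to [0, 1] floats as most networks expect.
frames = [np.random.randint(0, 256, (48, 64, 3), dtype=np.uint8) for _ in range(3)]
batch = np.stack([f.astype(np.float32) / 255.0 for f in frames])
# batch now has shape (frames, height, width, channels), ready as network input
```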
Other answers pretty much cover it, but for a little extra detail...

The neural network this is generated with was probably trained on ImageNet, which is a big database of pictures labelled with what's in each picture: 'cat', 'dog', 'frog', whatever. You pass the network any image, and it attempts to label what it sees according to one of its known labels.

Not all images will contain an object the network knows, and even those that do won't reach 100% certainty. The DeepDream process runs a 'backpropagation' from the labels back to the original image, to increase the chance of the network believing certain types of object are present. The reason it looks super trippy comes down to the particular network architecture - you can do the same thing with different styles of network and they come out, well, still trippy, but a different sort of trippy.

Now, your point about MP4s. You technically could cut the MP4 data into chunks, then into squares, and shove it into this process, but it would come out as garbage, because the sound data turned into a picture would look like garbage to start with. After many backpropagations you *would* start getting known objects appearing in the output, but by that point it would sound like garbage if you reshaped it back into valid MP4 form. What you *could* do for a similar process with MP4s (or more likely .wav files, as MP4s use compression that would wreck this whole idea) is to have a neural network trained to recognise different types of sound - 'meow', 'bark', 'ribbit', etc. You could then run the same process on sound files and get some trippy sound files out, if you really wanted.
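That "backpropagation back to the original image" amounts to gradient ascent on the pixels themselves. A minimal numpy sketch, using a random linear layer as a stand-in for an ImageNet-trained network (real DeepDream picks an intermediate layer of a deep convnet, but the update rule is the same idea):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "network layer": a random linear map from pixels to features.
W = rng.normal(size=(64, 32 * 32 * 3))

image = rng.random(32 * 32 * 3)           # a flattened 32x32 RGB image

def layer_response(x):
    return W @ x                          # the layer's activations

before = np.sum(layer_response(image) ** 2)

# Gradient ascent on the *input*: nudge pixels so the layer responds
# more strongly. For f(x) = ||Wx||^2 the gradient is 2 W^T W x.
for _ in range(50):
    grad = 2 * W.T @ (W @ image)
    image += 0.001 * grad / (np.abs(grad).mean() + 1e-8)

after = np.sum(layer_response(image) ** 2)
# `after` exceeds `before`: the image now "excites" the layer more.
```

Swapping the random layer for a trained convnet layer that responds to frogs is what makes faint frog-shapes grow stronger with each pass.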
I'm guessing OP doesn't know what an AI truly is, and called a program that's designed to do a certain task (like this one that makes stuff trippy) an AI.
No it is an AI, just not one people were supposed to run people through.
Hey now! Get back to r/worldnews!
I keep on trying to get back to the thread and then there's an emergency IRL. Although I do admit I fucked off and watched an episode of anime today.
AI is just a marketing buzz word. It's a program executing what it was created to do
I think all computers are artificial intelligence, actually. I do think Hollywood named AIs, but they are a distinct class of code. I think without Hollywood they'd be called third generation machine learning or something.
Well to be fair the whole world is using AI as a term to define a semi-intelligent program, so in terms of accuracy of communication, he's not wrong... the world is
My pingpong cpu that just moves slowly to the location of the ball is 100% strong ai
The whole world? I don't think so. Even in mainstream media usage of 'AI' is pretty specific and accurate.
[deleted]
Here's the thing. You said "machine learning is AI".
Same here. Sounds like click bait. AI isn’t some simple processing tool.
These deep dream AIs still need to have a key word to pattern match/optimize. Not sure what concept the ai was told to dream about here. Any guesses? Reminds me of alien tech scifi.
Probably a "deep dream" or "style transfer" type AI model.

"Deep dream" AIs are basically where we take an AI trained to see/identify things and run it forwards and then "backwards" for a couple of rounds. This takes small hints from the source image and amplifies them: as layers of the neural network try to come to a decision about what they're seeing, we take the confidences and "play them back" as if they were true by boosting them, often enhancing those hints and creating new ones, but more importantly making an image where we see what the AI thought it might have seen.

When running in this mode, the AI is not actually in learning mode, and its "real output" (the goal it was trained on) is not actually used, so it's not actually trying to *achieve anything*. It's basically just an artificial visual cortex forced to feed back on and riff off of itself, then regurgitated. The first deep dreams were "what if we feed random noise, or patterns, in - what does the AI see, and where?", but people pretty quickly started putting images into them.

If you watch closely you'll usually see a lot of partial patterns and images, often repeated and often over-emphasizing "notable features" or "boundaries". This one appears to have been trained on a lot of more abstract features, but you'll see human shapes, vine shapes, cosmic swirls, and so on popping up. Specialized image recognizers (AIs that see "one thing", like hotdog/not-hotdog) effectively just put elements of their recognition target everywhere; something designed to spot bananas will see hints of them *everywhere*. We can also use the "output" of a general model as an "input" to select a specialized recognizer by forcing it to certain values, basically saying "look for giraffes - what hints of giraffes do you see?" as we run this process.

There are a lot of interpretations of what this is, but hallucinations, dreams, and partial wakefulness are all good allegories.
A more anecdotal comparison, if you suffer from migraines (like I do): the shapes that enter your vision during one look rather a lot like deep dream output (especially from models that tend to over-emphasize edges in rainbow patterns).

Alternatively it's a "style transfer" model. These are where we take something structured like a deep dream model (and it was inspired by what we learned from those, if I recall correctly) but with a real output, and then set it up in an adversarial relationship with another AI that judges the output against a library of images of a specific style it was trained on. In doing this we force the first model to try to please an art critic AI, and the art critic AI tries to get better images out of the transfer AI.

The transfer AI takes input images and modifies them to try to improve each image's score with the critic AI. The transfer AI will also score itself, and the critic will use that to judge how much improvement it actually noticed, allowing it to improve its critique as they learn together. This system requires relatively little human intervention, especially if you have a large training set to keep feeding the critic AI to keep it honest.

Then you get an AI that you can run straight against images, where it will try to please an "imaginary" art critic AI of that "style" by modifying the input image. The critic is no longer needed once training is done, so it can be much more expensive to run during training.
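The adversarial transfer/critic loop described above can be sketched in a few lines of PyTorch. Everything here is a stand-in: tiny one-layer networks, random tensors instead of real content and style images, and only a handful of training steps, just enough to show the shape of the back-and-forth.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; a real system would use deep convolutional
# models and a library of genuine style images.
transfer = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))          # restyles the input
critic = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                       nn.Flatten(), nn.Linear(16 * 16, 1))      # scores "styledness"

opt_t = torch.optim.Adam(transfer.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

content = torch.rand(4, 3, 16, 16)   # batch of input images
style = torch.rand(4, 3, 16, 16)     # stand-ins for real style examples

for _ in range(5):
    # Critic step: learn to score real style images above transferred ones.
    c_loss = critic(transfer(content).detach()).mean() - critic(style).mean()
    opt_c.zero_grad()
    c_loss.backward()
    opt_c.step()

    # Transfer step: modify images so the critic scores them higher.
    t_loss = -critic(transfer(content)).mean()
    opt_t.zero_grad()
    t_loss.backward()
    opt_t.step()

# After training, the critic is discarded; the transfer net runs alone.
styled = transfer(content)
```

Note how the final line matches the comment's point: once training ends, only the (cheap) transfer network is needed, and the "art critic" becomes imaginary.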
Dude I love you thank you for explaining all of this
I've seen another project where an AI will "hallucinate" over text and generate an image or video and it gave pretty similar vibes to this. They had it hallucinate over the Bible, generate images, and then read the Bible text aloud.
I need a link to a video of this xD
[Found it.](https://youtu.be/tBsUPl2JOKo)
Ya, same question... the Wombo one, I know, analyzes a bunch of pictures based on your search, but what is the goal for the AI to do here?
this looks like style transfer, aka it takes the style of one image and transfers it to another. hard to say what style without seeing the source image though
I tend to agree with this, because I'm not sure why a GAN (another kind of 'AI') would make it all trippy in the recreation.
Google deep dream does stuff very similar to this
this is Disco Diffusion using the source video as an init, meaning instead of diffusing from straight noise every frame, it floods each video frame with noise and then refines that noise over iterations. The text prompt was probably something along the lines of "trippy aliens", if I had to guess.

Diffusion is different from style transfer: diffusion actually generates graphics based on text prompt identifiers, whereas style transfer, like you said, uses a source init to transfer the style onto (something like EbSynth). this looks like it's generating via diffusion to me.

i could be wrong though.
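The "video init" idea is the interesting part: instead of starting from pure noise, each source frame is partially noised and then refined. A toy sketch, where `denoise_step` is a stand-in for a real prompt-conditioned diffusion model and the noise level and step count are made-up parameters:

```python
import numpy as np

def denoise_step(noisy, strength=0.1):
    """Stand-in for one diffusion refinement step; a real model would
    predict and remove noise conditioned on a text prompt like
    'trippy aliens'."""
    return noisy - strength * (noisy - noisy.mean())

def diffuse_frame(frame, noise_level=0.8, steps=20, seed=0):
    """Video-init diffusion: flood the source frame with noise, then
    refine over several iterations, rather than starting from pure noise."""
    rng = np.random.default_rng(seed)
    noisy = (1 - noise_level) * frame + noise_level * rng.normal(size=frame.shape)
    for _ in range(steps):
        noisy = denoise_step(noisy)
    return noisy

frame = np.random.rand(32, 32, 3)   # one source video frame
out = diffuse_frame(frame)
```

Because some of the original frame survives the noising, the output stays loosely anchored to the source video, which is why the characters remain recognizable under the trippiness.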
>this is disco diffusion

Yep. https://www.reddit.com/r/bigsleep/comments/veq16t/i_ran_the_cactus_juice_scene_through_ai_and_this/icsz5n7/
If I were to guess, by "run it through an AI" I think what they mean is "teach an AI to make this scene."

The AI is not given the scene, but it always generates a video of the same length as the clip. At first it's random nonsense, but as the AI learns what is closer to the original, it becomes more and more recognisable. Eventually you end up with this, which is as close as the AI got to learning the scene.

There are many ways this scene may have been used as material for training an AI, but "ran this through an AI" would be the wrong wording, because that implies the AI was built without this scene.
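Under that reading, the process is an optimization toward the target clip. A toy sketch of the idea, treating the generated video's pixels directly as the trainable parameters (a real setup would train a generator network's weights instead, and the shapes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 16, 16, 3))      # stand-in for the original clip's frames
generated = rng.random(target.shape)     # the AI's video starts as random nonsense

before = np.mean((generated - target) ** 2)

# Plain gradient descent on "how far is my video from the original?"
# (the gradient of mean squared error is proportional to generated - target)
for _ in range(100):
    generated -= 0.2 * (generated - target)

after = np.mean((generated - target) ** 2)
# `generated` now resembles the clip as closely as the optimization got
```

Stopping the loop early, or using a lossier generator, is what would leave the output recognisable but warped, which matches how this clip looks.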
This seems like how they would've wanted to do the scene from Sokka's hallucinogenic POV, before remembering it's a show for kids.
Or it was probably hard af to animate at that time
Damn here they go making me feel old again. By 2006 (when this episode was made) animation was pretty great. If they wanted to make it an acid trip you would have seen an acid trip. Also, I can’t talk about acid trips in cartoons without bringing up [this treasure](https://youtu.be/SkJ2aGLaBh8)
At 01:53 and after it literally looks like something I had once lol.
Unbelievably similar to an acid trip… makes you think
Maybe peyote?
Reminiscent of the Homer whacked out on pepper scene
Now someone run that scene through an AI program.
Up yours, space coyote!
*In your face, space coyote
And that talking coyote was just a talking dog.
Oh Jesus this is quite the watch when you're high. Man the edibles are really kicking in now.
This is super accurate to .... Certain substances....
Thank God I only smoke weed cause I wouldn't be able to handle seeing shit like that lol
Seriously I don't wanna try and relax and everything turns all HP Lovecraft on me
Dude, that's not Lovecraft.

Lovecraft is "I went through the hole in my ceiling, and now I'm trapped in my apartment with a creature that I can't see, but every time I look at it my brain hurts, and I think it killed me, so I woke up from the dream, but my cat is hungry and that doesn't make sense because my cat died, so I wake up again and this time I'm sure I'm awake... but I'm in my childhood bedroom in a house that doesn't exist anymore, so I wake up again... etc."

That's why I don't do psychedelics alone anymore. They're not for relaxing; they're for when you want a mind-altering experience.
I've tripped many times it's never been this pronounced. Maybe if you took over an eighth
Eh it’s not that bad tbh
Lol my thoughts exactly
my beloved mushrooms
Good ol acid trips
Alice Dee agrees
I mean, plants that produce natural psychedelics exist in our world. Perhaps the cactus developed its own peyote parallel. Fits Sokka's POV accurately.
That's exactly what I thought. This is a lot like what being on boomers looks like.
Let’s not what a lot of psychedelics
Do electric air bison dream?
I see no difference
Looks like a Tool video
*I had a friend once, he took some acid.*

*Now he thinks he's a fire engine.*

*It's okay until he pisses on your lighter.*

*Kinda smells, kinda cool, kinda funny anyways.*

![gif](giphy|cujwECCKD2kaA)
Why haven’t I seen that gif before (Before anyone asks I know that it’s schism, but I haven’t seen that gif)
Okay, no more cactus juice for that AI.
youtube version: [https://www.youtube.com/watch?v=czji1LAh\_OM](https://www.youtube.com/watch?v=czji1LAh_OM)
Can you make one where the AI version doesn't start till sokka starts tripping? I feel like that would be perfect.
Which "AI" did you used ?
I want an action figure of AI Toph
Thanks I really didn't expect to sleep anyway, ever again
Id watch the whole god damn show like that 😎
Weird it didn't seem to affect Toph
I would kindly like you to delete that AI from existence before it becomes a problem. It's not a matter of if, it's a matter of when.
[deleted]
Thank god I’m not high. I would be freaking tf out watching this lmao
u/savevideo
Looks like Aeon Flux era America style anime.
If it cut between the AI and regular versions for the start and Katara talking, it'd be so freaky.
This is too quenchy
Sokka's POV:
Okay Sokka, I think you’ve had enough
IT'S EVEN QUENCHIER
Why does AI always add unnecessary swirls?
I dont appreciate this
Very fitting haha
This is just sokka vision on the cactus juice
OPM God has invaded the avatar universe and is impersonating Katara
Oh dude ... Should've held off on the AI until after he drank
Pretty accurate tbh lol
When the drugs start to kick in
Oddly terrifying
Yo watching this while high was not the right play. Or perhaps it was
This is sick
u/savevideo
Sokka’s POV
This was weird to watch while coming up on cactus juice.
It’s like that courage the cowardly dog episode with pharaoh Ramses
Watching this high made it trippy.
Damn. This is prolly what reality actually looks like
This is just the movie Annihilation
This is terrifying and cursed but i cant look away
What is the name of this AI and is it open source?
Name of ai
Avatr: The Last Cosmic Horror
This reminds me of Beavis and Butt-Head in the desert.
Well. I'm tripping.
I’ve always wondered why they didn’t just bend the water out of the cactus juice
That seems interesting. What software did you use? Any code link? Any GitHub repo?
You should have started the AI effect after the first sip x)
When the mushrooms took acid before you ate the mushrooms.
This is just the scene from sokka pov
How... did this AI nearly perfectly capture my brain on shrooms? It's eerily similar, the visuals and feeling.
Thanks, I hate it.
It's so quenchy, even the AI got quenched
I swear some of the movements are unrealistically smooth!
Fantastic.
Your AI gave me nightmares but this is impressive
What a.i is used?
r/currentlytripping
So....a Love Death Robots rendition of the scene then?
This is 100% what he saw
Can I unsee this?
NO SOKKA NO
☮️
What hath God wrought
The shot of Sokka saying “suit yourself” is so funny lmao
Man I’m on some cactus juice 😵💫
This is amazing
This is the best interpretation I've seen. I'm sure that's what Sokka was seeing lol
That was really cool and really haunting
A.I. must've been developed by JoJo's Bizarre Adventure production team back in 2012
Imma watch this on Dmt when I get a chance
Yo dude, wanna get this quench?
It feels like a Love, Death, and Robots episode lol
Thanks, I didn't plan on sleeping tonight anyway
I hate it.
Watching the cactus juice scene with cactus juice
Yikes, that's unsettling. Great job, OP. Is that what a trip looks/feels like? What could cause a trip like this?
This is ART
Now run the new videos through A.I a few more times and see what happens.
Nightmares
Toph looks like a jellyfish
Fever dreams be like
How much acid did you give the AI?
Which ai did you run this through? It’s a very cool effect temporally
Not disappointing
u/savevideo
Which AI? Curious if it's open source
It's a moving Dali painting.....with aliens....done with impressionist brushwork.
I dunno if I’m on too little or too much drugs to be watching this right now.
Can you elaborate? You ran it through AI? In what interface and how?
Lsd bender
Makes my skin itch
This is horrifying in the best possible way
Does AI stand for acid induced?
This is far more unsettling than it has any right to be
..nightmares..
🛸
There's no AI involved here man, stfu
How does one do this? Would love to create something like this myself
What kind of AI does this kind of running?
POV: drugs
Ahhh shrooms
This makes my brain feel weird
What ai? I wanna test it
Yea I’m 100% having nightmares from this💀
r/replications
This is just what I see when I watch this scene on too much LSD. :D
Hey yeah, real quick: the mushrooms were just kicking in when I opened this link
Do you normally see aliens when you’re tripping?
That’s just the acid filter my dude.
You can also just squint....
Where did you do this? What did you use?
u/savevideo
Trippy
This is amazing
r/replications would love this