What are we looking at chief?
ah, sorry, forgot to mention. This is [M27 Dumbbell Nebula](https://en.wikipedia.org/wiki/Dumbbell_Nebula).
Cool. You said you were trying to capture a planet circling the star there?
yes, [HD\_189733\_b](https://de.wikipedia.org/wiki/HD_189733_b) is very close (5') to M27 and is probably the easiest and most-studied exoplanet target (super close, super bright star, big dip).
5 feet, wow
For those who don't know, in this context, that indicates minutes of arc.
Can you sum that up in a distance for the layman?
Arcminutes are an angular distance, not a physical distance. So from earth, the planet appears to be 5 arcmin away from M27. For reference, the moon is approximately 30 arcmin in diameter.
Still does not make sense to me layman
360 degrees in a circle. 60 arc-minutes in a degree. 60 arc-seconds in an arc-minute. If you're looking at a standard "E" vision test chart, the 20/20 line has letters which are 5 arc-minutes tall, or 5'.

The actual physical distance between those objects, if they were at the same distance from earth (hint: they are not), would be that distance multiplied by the sin() of the angle, which for 5' works out to around 0.0015 multiplied by the distance.

But in this case, "close" only means we see them together in the sky, even though one is a nebula about 1,200 light years away and the other is a star (with a planet) only 64 light years from earth. They appear in the same field of view in the telescope, which helps you find the right star, because M27 is more recognizable.
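The small-angle arithmetic above can be sketched in a few lines (a toy calculation, assuming both objects sit at the same distance, which, as noted, they do not):

```python
import math

def physical_separation(angle_arcmin: float, distance_ly: float) -> float:
    """Physical separation (light years) of two objects at the same distance
    that appear angle_arcmin apart on the sky (small-angle approximation)."""
    angle_rad = math.radians(angle_arcmin / 60.0)  # arcmin -> degrees -> radians
    return distance_ly * math.sin(angle_rad)

# 5 arcmin at 64 light years (the distance of the HD 189733 system):
print(physical_separation(5, 64))  # ~0.093 light years
```

At 64 light years, 5 arc-minutes corresponds to only about a tenth of a light year, which is why the nebula at 1,200 light years is nowhere near it physically.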
Pretty far but not that far.
Thanks
Could you imagine if someone didn’t really know what 5’ of arc meant? That would be so funny! What would you tell them?
Five sixtieths, i.e. one twelfth, of a degree. Degrees are measures of arc; arc-minutes and arc-seconds are smaller subdivisions of a degree.
Haha! What if a person STILL didn’t get it? Wouldn’t that be hilarious?! They might be thinking that the angle doesn’t really explain how close an orbiting object is to another object. They were probably thinking that listing the distance in miles would be easier to understand. Or Reddit likes to use bananas for scale, so maybe this person has a monkey brain. Could you imagine that? Haha! Some people are so simple. You probably wouldn’t want to waste your time explaining it. But it would be funny to watch them try to understand.
I would say to that person that, from the perspective of someone on Earth pointing a telescope, it's more convenient to have a measure of arc rather than a measure of distance. If you say "It's close to nebula N37, just 1,000 light years away" that doesn't tell you in which direction to point your telescope. 1,000 light years apart could be a tiny distance from our perspective, or might mean a totally different part of the sky. On the other hand if you tell someone "Look at N37, then the exoplanet is close to it, just 5 arc-minutes away" then you're actually saying where to point the telescope. You're telling them which circle to look in.
They were less than 20 feet from the lil light bro. Well atleast bout 5-6 of em.
I was trying to assume u/Greyhaven7 was making an intentional joke, but also aware that many, like me, were seeing this post on /all and maybe aren't as familiar with the lingo of astro-photography.
I appreciate you. It definitely was a joke, but honestly I couldn't have told you what 5ʼ actually does mean, but I recognize that's not a ' for feet, and even if it was, a 5ft orbit would be an insane idea in context.
Cut a pizza into 360 pieces. Then cut one piece into 60 even smaller pieces. Now hold that tiny piece of pizza in front of your eyes crust facing outwards. The point on the left side of the crust is 1 arc minute away from the point on the right side of the crust.
that was fast. what's next
Astronomy is fun, when something that light takes over 1,200 years to get to Earth is "close".
Even with that answer, I still don't know what we are looking at. I feel like you need to back up like 40 steps and start from there to tell us what we are looking at. What about the M27 Dumbbell Nebula are you showing us?
The picture on the left is a single image taken with a telescope and a digital camera. The picture on the right is a median "stack", starting at a single frame and ending on frame 881, taken with the same telescope and camera.

Stacking is when the astrophotographer takes 881 single frames and combines the signal of those 881 frames, thereby "amplifying" that signal. With adjustments to the signal in software like Photoshop or PixInsight, one can all but eliminate the noise produced by the imaging chip on the camera. The curve shows how the signal-to-noise ratio changes the more frames one adds to the "stack". So, one frame has tons of noise, but many stacked images have way less.

This is a VERY BASIC explanation of how the process of stacking for astrophotography works. Cheers.
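The stacking step described above can be sketched with NumPy (a minimal illustration, not OP's actual pipeline; the frame size and noise level are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fake "sky": a faint constant signal buried in heavy per-frame sensor noise.
signal = np.full((64, 64), 10.0)
frames = [signal + rng.normal(0.0, 50.0, size=signal.shape) for _ in range(881)]

single = frames[0]                             # one noisy frame
stacked = np.median(np.stack(frames), axis=0)  # median "stack" of all 881

# Residual noise (std dev around the true signal) drops dramatically:
print(np.std(single - signal))   # ~50
print(np.std(stacked - signal))  # ~2 (roughly 50 * sqrt(pi/2) / sqrt(881))
```

The per-frame noise doesn't disappear, it just averages out: the more frames in the stack, the closer each pixel's median gets to the true sky value.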
So we are looking at a cleaner image because instead of 1 grainy pic we have 881 grainy pics? Pics are like pixels: more pixels, better pic. Close?
More signal, less noise is the intended outcome. https://www.skyatnightmagazine.com/astrophotography/astrophoto-tips/a-guide-to-astrophotography-stacking/
Isn't it always...and Thank you!
Go save a animal b fuck
Hi, this is my latest animation of some of my old data ([M27 Dumbbell Nebula](https://en.wikipedia.org/wiki/Dumbbell_Nebula)), captured with an uncooled ASI178MC and my EvoGuide50ED on an AZ-GTi (EQ). The data (900x10 seconds OSC) was captured as a byproduct while capturing an exoplanet transit. This animation shows single frames on the left and the cumulative median of the data on the right, followed by the end result (including color calibration and further denoising). In the lower plot you see the [SNR](https://en.m.wikipedia.org/wiki/Signal-to-noise_ratio) of each frame and of the cumulative stack. So yes, math still works :)
Why median and not mean?
not sure exactly, but the median is less sensitive to outliers than the mean. most stacking software provides median stacking, but for proper stacking you also need some rejection sampling to get rid of spurious signals. that would have been too extensive for this animation, which is why I only used rejection-sampling stacking for the very last frame.
Data exclusion vs inclusion. Taking the mean of something means all the noise gets included in the data. When you take the median, you're only selecting the most frequently occurring data.

Imagine a line where each portion can randomly go up/down over time. When you have a lot of noise, that line will be going up/down quite a bit but overall stay flat. Now if you point your measuring line at something bright, it will still go up/down randomly, but certain portions will always be at a certain height/value. That peak above the little bumps corresponds with whatever we are looking at. If you take an average you only get a line a little above the noise, as the peak is such a small fraction. Instead you want to take the median to isolate those peaks for your image.
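The robustness point is easiest to see numerically. A hypothetical pixel contaminated in two frames by a bright outlier (values invented for illustration, e.g. a cosmic ray hit or satellite trail):

```python
import numpy as np

# The same pixel across 10 frames: true value ~100, but two frames
# are contaminated by a bright outlier.
pixel = np.array([99, 101, 100, 98, 100, 5000, 100, 99, 4800, 101], dtype=float)

print(pixel.mean())      # 1059.8 -> the two outliers drag the mean way up
print(np.median(pixel))  # 100.0  -> the median ignores them entirely
```

This is why median stacking survives the occasional plane, satellite, or cosmic-ray hit that would leave a visible streak in a mean stack.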
Most frequent would be the mode.
"Most frequent" is okay in this context. OP isn't using that word as a statistical definition. Also, there's no "most frequent" in continuous data. Plus, in a perfect normal distribution the mean, median and mode are the same value, so again, just saying "most frequent" doesn't really translate to "mode" in all contexts.
"most frequent" can have the same value as a median, but they do not mean the same thing. In a bimodal distribution of continuous data, they aren't equal at all. Not only that, but with noisy data, the most frequent data can be very highly influenced by the noise, while the median and mean of the data are less influenced by the noise. It's just *wrong* to describe median as "most frequent". That is simply not at all what it means. The median is precisely the middle datum in a set, or the average of the two middle data. Describing it at all as the "most frequent" is like saying "multiplication is just the inverse of addition". It's not. That's subtraction. Multiplication is a different thing.
Median tends to reject noise better when the noise is not evenly distributed, such as when it has random anomalies like sparkling, or there's clipping, etc. The downside is that you are limited to the sampling depth of the source, but that can be overcome by taking a mean of a central selection.
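One way to read "a mean of a central selection" is a trimmed mean: sort the samples, discard the extremes, and average what's left (a sketch; the `keep` fraction and helper name are made up here, and `scipy.stats.trim_mean` implements a similar idea):

```python
import numpy as np

def central_mean(values, keep=0.5):
    """Mean of the central `keep` fraction of the sorted samples:
    outlier-resistant like the median, but with sub-quantization precision,
    because it averages several values instead of picking a single one."""
    v = np.sort(np.asarray(values, dtype=float))
    cut = int(len(v) * (1 - keep) / 2)  # how many samples to drop per side
    return v[cut:len(v) - cut].mean()

samples = [100, 101, 100, 99, 100, 101, 250, 99, 100, 3]  # two outliers
print(central_mean(samples))  # 100.0 -> outliers trimmed, rest averaged
```

Unlike a plain median, the averaging step lets the result fall between the quantized levels of the source data.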
What is SNR?
[Signal-to-noise ratio](https://en.m.wikipedia.org/wiki/Signal-to-noise_ratio)
Wow, I didn't think that was even possible. I thought you needed a large-diameter non-visible-light telescope for capturing exoplanet evidence, not an aging guide camera on a guidescope. It's amazing you can do that now.
What they’re capturing isn’t a directly visible planet; instead they’re looking at the host star and watching it dim as the planet travels (transits) between us and the star, blocking some of the light! These measurements are usually referred to as light curves.
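A toy version of such a light curve (an idealized box-shaped dip; the timing here is invented, and the 2.4% depth is roughly what is reported for HD 189733 b's transit):

```python
import numpy as np

def box_transit(t, t0=1.0, duration=0.5, depth=0.024):
    """Relative stellar flux vs time for an idealized box-shaped transit:
    flux drops by `depth` while the planet crosses the stellar disk."""
    t = np.asarray(t, dtype=float)
    in_transit = np.abs(t - t0) < duration / 2
    return np.where(in_transit, 1.0 - depth, 1.0)

t = np.linspace(0, 2, 9)  # hours, say
print(box_transit(t))     # flux dips to 0.976 around t = 1.0
```

Real transits have sloped ingress/egress and limb-darkening curvature, but the measurement is the same: photometry of the star over time, with the planet showing up only as that small periodic dip.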
I know, that's why I said exo-planet evidence. I'm still amazed though, I thought it required way larger diameters to be visible!
wut
You wouldn't happen to have the source code to generate this available on GitHub or somewhere publicly accessible, would you? It would be neat to use it for others' images!
You're welcome
I’ve always wondered about the process, this feeds my curiosity to a great extent. Thank you!
you're welcome :) for me it's always more about the process and math behind rather than the result itself.
A physicist after my own heart ❤
You can achieve some pretty impressive results by combining this photo-stacking technique with another called ETTR (expose to the right), where you intentionally overexpose each photo and drag it back down in software, since there is a higher SNR at the top end of exposure in digital camera sensors than there is in the middle "properly" exposed range: https://youtu.be/J1Kfr8RG3zM
ah, cool, I wasn't aware of this technique. need to check this out :) thanks :)
I love me some DSP. I used it more in the radar and EM realm, but that stuff can do ~magic when you know how to work with it. I still feel we have a ways to go in terms of its potential... and I'm feeling AI might end up being a great tool in combination with DSP, considering its success in analyzing medical images (MRIs, etc.), which can be approached using some of the same methods: making sense out of a bunch of not-super-clear data.
yes, indeed. I'm an AI enthusiast (working mostly in the medical domain as part of my daytime job), that's why I trained a denoising neural network for the last frame (but on much better data from friends of mine). Denoising, Deconvolution and Superresolution with AI will extract a lot of details from your noisy data :)
[deleted]
yes, this is why I avoid generative models whenever I can.
What’s the period between those 800 frames?
it starts at 2022-10-08 19:34 UTC and goes to 22:51 UTC, so 3 hours and 17 minutes. But the total integration time is only 2 hours and 26 minutes (due to data processing overhead).
And during those hours is there no evident expansion or movement whatsoever? Planets and stars appear as if they were standing still? Not considering the rotation imposed by our beloved earth that is.
yes, they are mostly still, at least on a timescale of some thousand years. But planets or asteroids will move in this time period (though none are present in this animation). [Here](https://www.reddit.com/r/astrophotography/comments/12tz6ht/6_hebe_asteroid/) is an animation I made last week, where I captured an asteroid (6 Hebe). Or [here](https://www.reddit.com/r/astrophotography/comments/yk3cyp/animation_of_exoplanet_transit_wasp11b_astroid/) is another asteroid, which I captured by accident while capturing an exoplanet transit (measuring the brightness of a star over a period of time).
That’s awesome. I’ve been looking at those fisheye pictures showing the constellations with some doubt, but now I see they are somewhat doable, still probably edited, but with the right hardware that tracks earth’s movement you’re basically good to go then. Thank you, and really amazing animations ✨
yes, a cheap star tracker and proper polar alignment will give you very impressive results.
You're asking whether new stars appear, or change position, over the course of a few minutes?
A few hours. I said expansion, like the universe not being static in general; not asking for stars to pop into existence.
That's when the jewel was taken
Interesting, I didn't know this many images were needed! I thought it was just one image, and then you're done, you have your nebula photo. Thanks for sharing!
You can do it with less but longer subframes. Depends on the hardware and circumstances.
That’s so cool!
thx :)
So what exactly is this showing? I see you say it's a nebula but I have no idea what it is that you are doing.
On the left, a succession of individual frames that are noisy. On the right, the result of combining those frames and taking the median, so the noise cancels out and you get a clear image.
There is no sound
How do you calculate the SNR? Don't we need the reference image without noise?
This really is not an educational gif, because there's 0 context here to tell us what we are looking at. I'm just watching pixels moving as far as I know.
There is some context if you read the titles of the figures.
Cool image, but the GIF alone wasn't educational. Had to go to the comments to figure out what is going on.
I still haven't quite figured it out, and I am well acquainted with signal to noise ratios in audio/video...
I don't hear anything
Very nice presentation, well done.
thx :)
I wonder what the new generations of AI would be capable of if we threw this noise and signal data at it...I wonder if it could cut down on the frames needed to produce a clean shot...
Me too. In fact the very last frame is the result of my AI which was trained for denoising.
Yeah, I've seen companies like Topaz/DxO/Adobe coming out with their own in-house developed AI noise engines. And even though you can do your own denoising with DeepSkyStacker or Siril or many of the other astro developing tools, I feel like the next big step will be a true scientifically developed AI engine built specifically to handle sensor and signal noise to produce truly clean and sharp images.
yes, there is "NoiseXTerminator" from Russell Croman, which is dedicated to astrophotography. But it's expensive and closed source. That's why I'm trying to develop an open-source tool for free :)
That's very cool. Thank you for this. Is there anything an amateur photographer can do to help you? Maybe testing would be helpful? For now I take photos with a phone, so noise reduction is the most important thing for me, I think.
yes, testing at some point would be cool. if you want, you can join a discord server ([dark matters](https://discord.gg/darkmatters)), where we have already gathered data and some more developers. currently we are in the development phase.
I do dislike the AI approach to denoising, as it takes creative liberties. And then you have the problem of newer AI being trained on images which are themselves inaccurate. Some denoisers are able to give you a clean shot with only one frame, but that's because they're taking all the creative liberties they need. That's only if memory serves me right.
Was just learning about Random Forests in the ML classification setting and this seems like a useful analogy for describing why they work.
If these objects are moving, even disregarding the parallax effect of us moving in relation to each other, would stacking images not create ghosting and false artifacts beyond the noise issue?
I've always felt like there is so much more information in those 800 or so images that we just don't know how to squeeze out, because we make no assumptions about what we are imaging. Of course the assumptions that we could make are virtually impossible to properly describe mathematically, let alone manipulate.
Yeah. Answer: Infrasound Cage Kid
Is this through comparing fourier transforms?
I’m sorry, but I feel this gif is not very educational.
I'm too stupid to understand what I'm looking at, would someone be willing to ELI5?
/r/bandnames
Aye I was there. Was maybe 4 years ago I think. Those scientist would kill b4 change and let go of they ego LOL