Monday, November 30, 2020

Excellent Post About Using Topaz and PI

For all the crappiness on CN, there's occasionally a really useful post:

https://www.cloudynights.com/topic/742075-i-gave-topaz-a-whirl/?p=10690138

I wouldn't be surprised if the mods remove it (or just move it) because it's too useful for the average idiot who reads CN.  😝

Sunday, November 29, 2020

Dealing with Artifacts in Autostakkert and Registax in Planetary Files

This is another one of those "note to myself" posts so I can remember how I got through some technical problems with Autostakkert and Registax.

There are a number of reasons why you'll get artifacts in Registax while using wavelets -- or rather, there are many pathways to artifacts in Registax.

(In the interest of clarity, I should mention that the artifacts generally appear in Registax when using wavelets, or in PS when sharpening.  Sometimes, the artifacts are so severe that you can see them in the stacked TIF file.  If you see that, it's not worth bothering with Registax or PS -- you need to move some alignment points (sometimes you even need to start over) and stack again.)


Here's a list of things that can lead/contribute to artifacts:

1)  In Autostakkert, putting alignment points OUTSIDE a planet's edge.  This sometimes happens if you let Autostakkert automatically insert alignment points.  It can obviously happen due to human error if placing them manually.

2)  In Autostakkert (AS), putting alignment points ON the planet's edge.  This is sometimes fine with really steady video footage.  But when you play your video in the preview window and see the planet's edge moving a fair bit, that edge alignment point can sometimes end up on the OUTSIDE of the edge.  (The lesson here is to move the alignment point INSIDE the edge by a fair bit.)

3)  In AS, it's tempting to use a bunch of small alignment points all over.  The reason I've done this in the past is that -- in my experience -- AS will produce a more detailed and contrasty stacked file with a bunch of smaller alignment points versus a small number of large alignment points.   But small alignment points carry a greater chance of artifacts.   Again, with really good video footage, you can get away with a lot of small alignment points.  With average data (or worse), you are asking for trouble.

4)  In AS, there's a "Quality Estimator" (QE) setting that I've discussed in the past.  It generates a graph that resembles a seismograph data sheet.  I've noticed that when the graph touches the bottom axis for more than a moment (guessing 3% or more of the total), you may encounter weird ovals in the stacked image.  The fix is to change the QE setting so that the generated graph stays off the bottom axis.***

5)  This is related to the above remark...   If your QE graph is so compressed that it looks like a fat line instead of a zig-zag graph, then you may encounter artifacts that you cannot eliminate with ANY alignment point scheme.  Usually, there is some QE setting that will "widen" the line so that it resembles a zig-zag.

6)  The video footage was shot TOO CLOSE to the edge of the frame.   I've had this happen due to laziness OR when trying to dodge a piece of dust in some part of the frame.   It gets to be a REAL problem if the planet goes off the edge for any period of time.  In the video footage preview window, you can "scrub" through the video by moving a slider.   You can also hit the space bar to remove any "bad" frames where the planet is too close to the edge or has moved off the frame.  The problem is that if the planet moved off the frame for more than a second, you will need to "spacebar" (remove) A LOT of frames.  Sadly, I've spent 20-30 minutes removing frames on just one clip.  It's monotonous, and sometimes artifacts still appear.

7)  Speaking of dust...  a dust bunny will get stacked and create its own artifact.

8)  An undetected moon on Jupiter's disk.  You really need to place an alignment point on the moon, but sometimes you can't quite see where the moon is located.  AS will treat the moon as surface detail, and it will come out distorted.

9)  Really NOISY video footage...  In an effort to really "push the edge" on the number of frames, I've shot at like 250+ fps with high gain settings.  The resulting footage is often noisy and will produce artifacts.    Also, if you are shooting through clouds or poor visibility, you can end up with really noisy video with no details.   Noisy images don't inherently produce artifacts; it's just that they are LIKELY to produce artifacts depending on your alignment points.

10)  Overexposed image.   If you make the mistake of shooting with the histogram pushed to the right, you can often get artifacts.  This can easily happen when shooting Venus.   It gets brighter and brighter as it rises and you forget to change your settings.

11)  Alignment points near a terminator.  I've noticed that alignment points set near the terminator of Mars will produce artifacts that look like partial circles.  The solution is to try fewer alignment points with a much larger size.

There are probably tons more scenarios where you can get artifacts, but I'll add to this list as I remember more of them....
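A few of the items above (planet drifting to the edge, overexposure) can be screened for before you ever open AS.  Here's a hypothetical numpy sketch of that idea -- the function name and all the thresholds are my own guesses, not anything Autostakkert actually uses:

```python
import numpy as np

def frame_flags(frame, edge_margin=20, clip_frac=0.01, bright_thresh=0.25):
    """Flag a frame (2-D float array, values 0-1) if the planet's bright disc
    sits too close to the frame edge, or if the histogram is clipped on the
    right (overexposed).  Thresholds are guesses for illustration only."""
    bright = frame > bright_thresh              # crude planet mask
    if not bright.any():                        # planet left the frame entirely
        return {"off_frame": True, "near_edge": True, "overexposed": False}
    ys, xs = np.nonzero(bright)
    h, w = frame.shape
    near_edge = (ys.min() < edge_margin or xs.min() < edge_margin or
                 ys.max() > h - 1 - edge_margin or xs.max() > w - 1 - edge_margin)
    # "pushed to the right": too many pixels at (or near) full scale
    overexposed = (frame >= 0.99).mean() > clip_frac
    return {"off_frame": False, "near_edge": bool(near_edge),
            "overexposed": bool(overexposed)}
```

Run over every frame of a clip, this would give you a cull list instead of 20-30 minutes of spacebar monotony.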

So, I'm kind of getting to the point of using small and medium alignment points with 8-10 larger alignment points interspersed around the edges.  Also, I'm using alignment points sparingly near the terminator of Mars.

 


There are some "best practices" to list after reflecting on all of this, but I'm too tired to compile them right now....  :(

 

***Sometimes you can't keep the graph from hitting the bottom.  The fix has been to UNCHECK the "Laplace" option box.  But I've found that this will oftentimes result in stacking bad frames with good ones.   Not sure how to remedy this.




Wednesday, November 25, 2020

Another excerpt from a CN thread where bias is not allowed to be discussed

Bean614

Posted Yesterday, 07:00 AM

"Fellows what do you think.....".....???????

 

Are women not allowed to comment?



#3 aaube

Posted Yesterday, 07:27 AM

That was uncalled for.

 

Bean614, on 24 Nov 2020 - 05:00 AM, said:

"Fellows what do you think.....".....???????

 

Are women not allowed to comment?


#4 Bean614

Posted Yesterday, 07:29 AM

Why?


#5 imtl (moderator)

Posted Yesterday, 07:52 AM

Moderator hat : ON

 

Because,

 

Fellow: "a person in the same position, involved in the same activity, or otherwise associated with another."

 

And also because the OP is not native in English and there is a language barrier. 

 

Now please no more of this and STAY ON TOPIC. We welcome all astronomers here and we don't really care who and what you are. Just be nice and enjoy the hobby and communicating with all people.

 

Moderator hat: OFF.

 ---------------------

More like "ideological police cap": ON forever.

 

Thursday, November 19, 2020

Myths of Small Aperture Beating Large Aperture

I was looking for information on the pros and cons of the C14 vs. C11 and came across this post on CN:

RickV

Posted 25 June 2016 - 01:00 AM

...

I just came home tonight from a star party.  Seeing was average.  My Orion 120mmED 'apo' doublet outperformed a 25 inch Dobsonion - showing more detail on Jupiter, Mars and Saturn.  But... if seeing were excellent, then I have no doubt the 25 inch circus cannon would have blown me out of the sky....

 

Sometimes a myth becomes a delusion, and this is one of those cases.  There are several problems that could explain the account.   The first possibility is that the dob in question was not collimated properly, so no views above maybe 100x would be sharp.   A second possibility is that the optics could have been bad -- we have no info on the type of mirror/secondary.   There were tons of crappy large mirrors made in the 80's, 90's and early 2000's by "reputable" opticians, where the scopes could never deliver better than 150-200x performance on any night.  It's also possible that the optics were not cooled and were consequently significantly overcorrected.   Large dobs need a few hours to acclimate (with fans running), and this seems a possibility.

The other rarely mentioned situation is the inexperience (stupidity?) of the observer.  I've been with observers at star parties who have NEVER looked through a big dob (25+ inches).  Some newbish observers can't deal with Mars or Jupiter being so bright.   They are literally blinded and can't see any contrast on planets even though the view is presenting immense detail.  OR when observing faint features in a galaxy, the observer can't make out any detail because 99% of their observing is done on bright DSO's and planets through small scopes.  Again, low-contrast detail is lost to the self-declared expert who has no business looking through large telescopes. 

But the myth goes on and on -- especially on CN.  The little scope outperforms the giant scope in all "real" situations.  The little scope can beat the seeing better than a large scope because of atmospheric cells, etc., etc.  The back-handed compliment of the big scope being a "cannon" -- obviously the wrong tool for astronomical observation.  And yet the million-dollar observatories with their "big scopes" must obviously be doing something wrong.  They really should be using $500 80mm refractors, because in the "real world", the small scope outperforms the big ones 90% of the time. 

Yeah...

Added later..... more nonsense from a CN idiot:

aztrodog

Posted 25 September 2020 - 11:15 PM

I appreciate different points of view, so here is mines to balance out some of the earlier postings. Personally I did not find either of my two superb 14” SCT or friend’s high end Dob to provide better, more detailed views than either a 6” or 7” APO. I owned two C14s, both of which I used extensively under South Florida steady skies. I also had access to my friend’s 16” Starstructure / Zambuto mirror. As good as those scopes were, I vastly preferred the views in my 7” APO and my friend’s Tak152. The purity and aesthetically pleasing views in the refractors were simply unmatched by the larger scopes. Trust me, from the financial point of view I would have loved for the SCTs or Dob to blow away, leave in the dust or _______ (fill in disproportionate statement) the 7” APO.

----------------------

Purity of views, eh?   What's next, Hitler had the best optics because he was all about racial purity?  C'mon.  Also, whenever someone says, "trust me..." when it comes to telescopes, it's usually bullshit.   The properly cooled and collimated C14 will always give the same (or better) views AT THE SAME MAGNIFICATION.   This numbskull is likely comparing his 7" APO at 100x vs the C14 at 300x.  And yes, the 7" APO will provide a crisper view at that unfair magnification difference.    But push the 7" apo to 300x and you will see a dimmer, fuzzier view.   Push both scopes on a night of decent (you don't even need perfect) seeing to 500x and you'll see the real difference.  

I think there's also a HUGE issue that no one focuses on (ha): Experience with small AND low-contrast detail.  I've mentioned this before but I think some people have the equivalent of 480p resolution in terms of eyesight while others have 4k resolution.   What I mean is that some people CAN'T MAKE OUT SMALL DETAIL in an image.   On top of that, I believe some people CAN'T DISCERN ANY SHADES OF SUBTLE CONTRAST.  Contrast differences have to be huge for some people.  It never occurs to them that they may be "unskilled" as a visual observer.  

We are a visual culture and it never occurs to anyone that there may be different degrees of visual acuity amongst the population.  Some of it is natural, but for visual astronomers, it's also skill-related.

I guess if you don't see it, it doesn't exist.   Which I think is a perfect metaphor for blindness.

And don't get me started on the "balance out" metaphor.  

There's a lot of ridiculous, erroneous and just plain bad discussion on the Refractor forum.  I think a lot of it is transparently an attempt to justify one's current equipment list -- especially if you own an expensive APO.

-----------------------------

Another example of "small scope delusion":

bobhen

Posted 13 December 2020 - 07:32 AM

Jon Isaacs, on 12 Dec 2020 - 3:35 PM, said:

In my experience, that is not the case. The 6 inch is being limited by seeing and diffraction, a larger scope is only limited by seeing.

 

This is easily seen with double stars. It takes better seeing for a 4 inch to split a 1.15" (Dawes limit) double than for an 8 or 10 inch to split that same double.. The airy disk is smaller.. i remember one night a year or two back.. Antares was low on the horizon, I took a look in the 22 inch, I thought I saw the companion. I cranked it up, there is was.. not pretty but  bright and widely split.. 350x.  It's a tough split in a 5 inch in decent seeing.. That's when I realized just how small that airy disk is.. It's 0.50" in diameter to the first minimum. It can take a lot of aberration from the seeing. 

 

 

These "limits" are not hard and fast, there is no point at which better seeing won't help a given scope show more.. if you set the limit for a 6 inch at 1", the planetary views will still be better in 0.5" seeing.

 

Jon

My experience is different...

 

I have compared my 210mm Mewlon to my Tak TSA 120 side-by-side on many nights. They ride side-by-side on he same mount.

 

The Mewlon has smooth mirrors and high contrast and is almost twice the diameter of the TSA 120 and yet on many nights does not show any more planetary detail than the 120mm refractor.

 

It takes above average seeing and the planets to be reasonably placed for the Mewlon to “start” to pull away. And these were side-by-side observations. Even my AP 155 refractor bumped into seeing on most nights.

 

In truly excellent seeing, a high quality 6” refractor (or any high quality telescope) can do 90-100x per-inch on Saturn. How many nights did that happen in the 17-years I owned the 6” refractor – zero. Some nights I did use 450x but most nights the scope was running around 275x, far below its capability – because of the seeing and nothing else.

 

Roland Christen used his 10” Mak to observe the Encke Gap in Saturn’s rings using over 800x when he was in Florida. He never saw that feature from his observatory back in IL, nor did the scope use that much power. It can’t from that location.

 

If one isn’t using 90 to 100x per inch on Saturn then seeing is limiting the scope’s capability or the optics aren’t good enough or both.

 

Chaz (CHASLX200) has posted here many times that he has used 1000x on the planets with his high quality Newtonians in his excellent Tampa Florida seeing. I had a 15” Dobsonian for many years with a near perfect Galaxy mirror and never came close to those powers from my PA location.

 

That’s my experience, for what it’s worth.

 

Bob

-------------------------

I've seen A LOT of bad posts by this bob character.   I still believe that some people can't see low-contrast detail through telescopes.  Bob is obviously one of them.  A low-contrast but detailed image appears as a fuzzball to some very unskilled but long-lived observers.   It's unfortunate.



Monday, November 9, 2020

No, the new iPhone doesn't take amazing astrophotos.

 https://www.macrumors.com/2020/11/09/austin-mann-iphone-12-pro-max-camera-review/

 >sigh<

Show me a detailed picture of the Spaghetti Nebula (Sharpless 2-240) and I'll be impressed.


Your current phone can take the same picture with the same very lame result.   



Progressing on Processing Planets (another boring meandering post to myself)

I've figured some things out but I'm still struggling with bringing out subtle detail.....

 

 

On the left is an image that I processed back in August.   And on the right is the same data processed on 11/9/20.  I was able to preserve much more subtle detail with techniques that I've learned in the past few weeks.   The images reflect only 60 seconds of video footage so it's not great to begin with.

What have I learned?

I've been working on Saturn and Jupiter RGB images in an attempt to settle on a workflow for planetary images.  I've added Winjupos to the workflow, and though it works well to eliminate noise, I can't really push the data much with additional sharpening.  I can do some stuff in Camera Raw Filter like "Texture", "Clarity", and "Dehaze", but those enhancements come at the cost of resolution and noise.  I was also sharpening in Camera Raw, and I noticed that it would sharpen unevenly across the planet's surface.  Specifically, it sharpens the middle 75% more dramatically than the outer 25%.   The result is a little weird, as it made the center of the planet look better focused than the outer parts.
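Camera Raw's behavior is a black box, but a plain unsharp mask makes a useful reference point because, by construction, it treats the limb the same as the center of the disc.  This numpy sketch is just that reference point -- I'm not claiming it's what Camera Raw does under the hood:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (zero-padded at the borders -- fine for a sketch)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)

def unsharp(img, sigma=2.0, amount=1.0):
    """Classic unsharp mask: img + amount * (img - blur).  Applied uniformly,
    so every part of the planet gets the same treatment."""
    return np.clip(img + amount * (img - gaussian_blur(img, sigma)), 0.0, 1.0)
```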

All this made me wonder if I was doing destructive enhancements in Registax.  

 

 

 

I wondered if it was a mistake to push the sliders in Registax very much before Photoshop.  I mean, my workflow was that I stacked (maybe with 25-50%) in Autostakkert, then brought it into Registax, did the autostretch, then pushed the sliders far enough to see decent detail on the planet surface.  I would then bring the Registax-enhanced R, G, and B channels into PS and merge them into an RGB image.  From there I would do an alignment of color channels, then jump into Camera Raw to increase texture, clarity, etc. 

I began to notice that the planetary image would have a pretty bright, offset area on the surface.   Even with the Highlights and Whites sliders pulled down in Camera Raw it was still a challenge to not blow out a section of the image.

This is when I had a harebrained idea to use HDR Toning (Image --> Adjustments --> HDR Toning) to flatten the brightness of the surface.  Then I would do an Autocontrast to bring everything back up to an appropriate brightness.  I processed two whole RGB data sets this way.  And the results looked "cartoony" or a little bit like I had painted a watercolor version of Jupiter.  

It never occurred to me to pixel peep the image before and after HDR Toning.  Well, I finally did and it was definitely a destructive process.   DON'T DO THIS AT HOME, KIDS.  It's like the resolution dropped by 30%.  
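There's an objective way to "pixel peep" a step like this: compute a single sharpness number before and after.  Variance of a discrete Laplacian is a standard crude proxy for fine detail -- a generic metric, not something Photoshop reports:

```python
import numpy as np

def sharpness(img):
    """Variance of a 5-point discrete Laplacian over the image interior.
    A drop in this number after a processing step (e.g. HDR Toning)
    suggests the step destroyed fine detail."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

The absolute value is meaningless on its own; only the before/after comparison on the same image matters.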

I thought maybe I needed to go back to the data while it's in Registax.  Registax wavelets can be a bit confusing.   A lot of people seem to like "Dyadic" mode.  In my experience, Dyadic tends to bring out a lot of artifacts.  After A LOT of finagling between Registax and PS, I was able to figure out that LESS wavelets in Registax means more processing wiggle room in PS.

The first thing I changed was the initial autostretch that Registax asks about...




I'm not 100% sure about this, but I *think* my limited stretching in PS was partly due to the fact that I let Registax stretch the image.

Furthermore, I found that "Linear" mode is less likely to bring out artifacts whilst moving the sliders.  Specifically, in "Linear" mode, the best setting to prevent artifacts was to set "Initial Layer" to "3".  Settings of 2, 4, and 5 can work as well.


 

As for the layers, I found that it depends A LOT on the quality of the data.  The higher the quality, the more the higher-level layers, i.e. "3" or "4", can be beneficial.  If the seeing conditions were not great, you generally want to manipulate the lower layers (1-3).  The picture above is not the best (ha) to illustrate this point.  In fact, that Layer 2 slider is probably 50% too aggressive.   But the reason I wanted to include this shot was to show that Registax does a very good job (about 80% of the time) of aligning RGB.  So after combining the RGB channels in PS, I come back to Registax for a quick RGB alignment, then continue in PS.
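The layer/slider idea is easier to reason about with a toy model.  Registax's actual wavelet scheme is more involved, but a difference-of-Gaussians pyramid captures the gist: layer 1 holds the finest detail, higher layers hold progressively coarser detail, and each slider is a gain on its band.  A hedged numpy sketch (the function names and defaults are mine):

```python
import numpy as np

def _blur(img, sigma):
    # separable Gaussian, zero-padded at borders (good enough for a sketch)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)

def wavelet_sharpen(img, gains=(1.0, 1.0, 1.0), sigma0=1.0):
    """Loose stand-in for Registax's layer sliders: split the image into
    difference-of-Gaussians bands (index 0 = finest) plus a smooth residual,
    scale each band by its gain, and re-sum.  gain > 1 boosts that scale."""
    bands, base = [], img.astype(float)
    for i in range(len(gains)):
        nxt = _blur(base, sigma0 * 2**i)   # each layer doubles the scale
        bands.append(base - nxt)
        base = nxt
    out = base
    for g, b in zip(gains, bands):
        out = out + g * b
    return np.clip(out, 0.0, 1.0)
```

With all gains at 1.0 the bands telescope back to the original image, which is why "less wavelets" leaves you closer to the raw stack and more wiggle room later.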

For a while, after Registax, the first thing I would do in PS was to go directly into Camera Raw Filter and start doing things there.   But what I never consistently tried was sharpening first, before doing anything else.   Specifically, I never just jumped right into Smart Sharpen.   A few days ago, I tried Smart Sharpen as my first process in PS after a light Registax setting.  It worked REALLY well in terms of sharpening without compromising the resolution.   

 


Smart Sharpen first!   Use Camera Raw Filter later!  I've tried this change on a few test images and it definitely helps.   The first image of this blog post shows the difference.  (And what's the nudging all about?  The first thing you do, actually, is to ALIGN the red-green-blue channels.   Unless the planet is at like 75 degrees altitude or higher, the OSC color image will have channel misalignment.  The difference between aligned and un-aligned is subtle, but I'll take as much improvement as possible.)
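The whole-pixel part of that R/G/B misalignment can be estimated with FFT phase correlation -- a generic technique, not necessarily what Registax or PS do internally.  A numpy sketch (integer shifts only, which is a reasonable first pass since dispersion offsets are usually a few pixels):

```python
import numpy as np

def shift_between(a, b):
    """Integer (dy, dx) to np.roll channel b by so it lines up with a,
    estimated via FFT phase correlation."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    f /= np.abs(f) + 1e-12                  # keep phase, discard magnitude
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                         # unwrap shifts past the halfway point
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def align_channel(channel, dy, dx):
    """Apply the estimated shift.  np.roll wraps at the edges, which is
    harmless when the planet is well inside the frame."""
    return np.roll(channel, (dy, dx), axis=(0, 1))
```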

I'm going to end this long post by elaborating on some points I made in the preceding blog post.  In Autostakkert, you can set a Quality Estimator value as you "Analyse" your video frames.  The process basically orders the frames best-first and worst-last.   So when you select 40%, you are only stacking the best 40%.  This is all well and good in theory.  

A month ago, when I was trying to do some preliminary processing, I sometimes found that there were good and bad images in the initial part of the stack.  A couple of times the first few images would be completely blurry.   After spending an embarrassing amount of time deleting those bad images (you can hit the space bar on a bad frame and it gets removed),  I finally just tried methodically generating stacked images with different Quality Estimator values. 
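In miniature, the Analyse step is doing something like this hypothetical sketch: score each frame, order best-first, mean-stack the top fraction.  AS3's real quality estimator is far more sophisticated (and the QE setting changes how it's computed), but the toy version shows why the ordering matters -- a bad estimator lets blurry frames sneak into the "best" end:

```python
import numpy as np

def stack_best(frames, keep_frac=0.4):
    """Score each frame with a crude gradient-energy 'quality estimate',
    sort best-first, and mean-stack the top keep_frac of the clip."""
    def quality(f):
        gy, gx = np.gradient(f)
        return float((gy**2 + gx**2).mean())   # sharper frames score higher
    scores = [quality(f) for f in frames]
    order = np.argsort(scores)[::-1]           # best (sharpest) first
    n = max(1, int(round(keep_frac * len(frames))))
    return np.mean([frames[i] for i in order[:n]], axis=0)
```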

 

 

So I took the same video footage and ran it through Autostakkert with the same percentage of frames and the same alignment points.  The only thing I changed was the Quality Estimator value.  I was skeptical about significant improvement.

Below are the results after a mild stretch in Registax.   All three used the same Registax stretch.  But it's obvious (especially in the TIF file) that a setting of 8 is the best.

(click to enlarge)

Considering Flagstaff's lousy seeing conditions, I think a Quality Estimator setting of "8" should be the default.