Added docstring and removed unused no-op class #1135
Conversation
This model does not do anything (it is only referenced in one test, where it is explicitly skipped), and it's confusing to have this do-nothing model with the same name as the model in astro/instrument that actually does complicated stuff.
Codecov Report
@@ Coverage Diff @@
## main #1135 +/- ##
==========================================
+ Coverage 64.71% 64.81% +0.09%
==========================================
Files 171 171
Lines 58616 58656 +40
Branches 3736 3742 +6
==========================================
+ Hits 37934 38018 +84
+ Misses 20390 20348 -42
+ Partials 292 290 -2
# Note: in_elo is presumably computed just before this snippet (the
# companion setdiff1d call); only the reviewed lines are shown here.
in_ehi = numpy.setdiff1d(ehi, elo)
if len(in_elo) > 1:
    ehi = numpy.concatenate((ehi, in_elo[-(len(in_elo) - 1):]))
ehi.sort()
So, this routine is badly named since it doesn't care about energies, but that's not really relevant for this review (I'm not suggesting we change that), except that throughout the code, when dealing with energy grids, I see checks along the lines of
if elo[0] > ehi[-1]:
    ehi, elo = elo, ehi  # or some other "reversal"
(unfortunately I can't remember where, or I'd point to one here, as that would be more helpful). This makes me suspicious about the assumptions of energy-grid handling. I think we are guaranteed that
ehi[i] > elo[i]
but can we guarantee that the arrays are in increasing order, i.e.
elo[i + 1] > elo[i]
Or, my personal belief: someone once saw something odd and started adding in these checks, as we don't have good documentation on the preconditions of a lot of our internal routines.
I note that the sort statements added here are not actually fired during testing.
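For concreteness, here is a minimal sketch of those two preconditions as runnable checks. The helper name _check_grid is made up for illustration; it is not an existing Sherpa routine.

import numpy

def _check_grid(elo, ehi):
    # Hypothetical validator; the name and messages are assumptions.
    elo = numpy.asarray(elo)
    ehi = numpy.asarray(ehi)
    # The guarantee we believe holds: every bin has positive width.
    if not (ehi > elo).all():
        raise ValueError("need ehi[i] > elo[i] for every bin")
    # The guarantee in question: bins are in increasing order.
    if not (numpy.diff(elo) > 0).all():
        raise ValueError("need elo[i + 1] > elo[i] for every bin")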
I can add tests for that. It's only needed for partially overlapping grids (I did some interactive playing around with that). I think the example in the docstring should hit that line (but we don't run the doctests currently, I think).
So, the tests don't seem to catch this case, which only fires when you have a disjoint grid like
xlo = [10, 20, 40]
xhi = [20, 30, 60]
I think (it's Friday evening, so I am not at my sharpest right now, but still...) the code
ehi = numpy.concatenate((ehi, in_elo[-(len(in_elo) - 1):]))
can be simplified to
ehi = numpy.concatenate((ehi, in_elo[1:]))
And ditto for the elo line. Does this make sense to you?
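To see the equivalence concretely (illustrative values, relying on the len(in_elo) > 1 guard in the code above):

import numpy

in_elo = numpy.array([15, 25, 45])
old = in_elo[-(len(in_elo) - 1):]  # everything but the first element
new = in_elo[1:]                   # the proposed simplification
assert numpy.array_equal(old, new)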
I think such a disjoint grid should probably raise an error. But compile_energy_grid takes an arbitrary number of input grids, not just two, so this case should come up when I have more than two. I'll add a test case for that.
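As a sketch of what such a test could assert, using the example grid from above (the helper _is_disjoint is hypothetical, not existing Sherpa code):

import numpy

def _is_disjoint(elo, ehi):
    # A merged grid is disjoint when some bin's lower edge lies above
    # the previous bin's upper edge.
    elo = numpy.asarray(elo)
    ehi = numpy.asarray(ehi)
    return bool((elo[1:] > ehi[:-1]).any())

xlo = [10, 20, 40]
xhi = [20, 30, 60]
assert _is_disjoint(xlo, xhi)  # the gap between 30 and 40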
More as a parenthetical than a change needed for this PR, but I wonder if my comment at #1135 (comment) about the elo/ehi checks was leading up to our discussions in #1163: that yes, we do need checks like this...
Apparently we do need them, just not at every location.
I added the requested test. Can you review again, @DougBurke?
DougBurke left a comment
Thanks for cleaning up the docstring and adding the extra tests. Much appreciated.
PR sherpa#1135 removed code but we never caught the documentation reference to the MultiResponseSumModel class.
Summary
Remove sherpa.astro.models.MultiResponseSumModel, which did nothing and seems to be unused (except in a test that explicitly skips this class).
Add a docstring to an internal function.
Details
It's confusing to have such a no-op class with the same name as sherpa.astro.instrument.MultiResponseSumModel, which actually does things. I suspect that sherpa.astro.models.MultiResponseSumModel was put in as a place-holder and then forgotten, but maybe the full tests will show me a use case that I've overlooked.
Notes
This is part of my attempt to get a specific plot to work for a dataset with multiple responses, and I spent so much time trying to understand what these functions do that I wanted to document it. More might be coming as I dig deeper, but following @DougBurke's lead, I'm going to open smaller PRs that are easier to review rather than one big one that combines many different aspects.
Timing: I'm opening this PR now because I need a specific feature, so I'm looking into the code now and might as well push this up so I don't forget about it. It can sit on GitHub until after 4.13.1 is out.