Conversation

@hamogu hamogu (Contributor) commented Apr 29, 2021

Summary

Remove sherpa.astro.models.MultiResponseSumModel, which did nothing and seems to be unused (except in a test that explicitly skips this class).

Add a docstring to an internal function.

Details

It's confusing to have such a no-op class with the same name as sherpa.astro.instrument.MultiResponseSumModel, which actually does something. I suspect that sherpa.astro.models.MultiResponseSumModel was put in as a placeholder and then forgotten, but maybe the full tests will show me a use case that I've overlooked.

Notes

This is part of my attempt to get a specific plot to work for a dataset with multiple responses. I spent so much time understanding what these functions do that I wanted to document it. More might be coming as I dig deeper, but following @DougBurke's lead, I'm going to open smaller PRs that are easier to review rather than one big one that combines many different aspects.

Timing: I'm opening this PR now because I need a specific feature, so I'm looking into the code now and might as well push it up so I don't forget about it. It can sit on GitHub until after 4.13.1 is out.

hamogu added 2 commits April 28, 2021 22:04
This model does not do anything (it's only referenced in one test, where it is explicitly skipped),
and it's confusing to have this do-nothing model share a name with the model in
astro/instrument that actually does complicated stuff.
@codecov codecov bot commented Apr 29, 2021

Codecov Report

Merging #1135 (2f6b4de) into main (be5faac) will increase coverage by 0.09%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #1135      +/-   ##
==========================================
+ Coverage   64.71%   64.81%   +0.09%     
==========================================
  Files         171      171              
  Lines       58616    58656      +40     
  Branches     3736     3742       +6     
==========================================
+ Hits        37934    38018      +84     
+ Misses      20390    20348      -42     
+ Partials      292      290       -2     
Impacted Files Coverage Δ
sherpa/astro/models/__init__.py 78.85% <ø> (-0.10%) ⬇️
sherpa/astro/utils/__init__.py 79.51% <100.00%> (+0.80%) ⬆️
sherpa/_version.py 30.71% <0.00%> (-5.72%) ⬇️
sherpa/sherpa/astro/ui/tests/test_serialize.py 53.58% <0.00%> (-0.21%) ⬇️
sherpa/sherpa/astro/models/__init__.py 75.11% <0.00%> (-0.11%) ⬇️
sherpa/astro/plot.py 84.83% <0.00%> (ø)
sherpa/models/model.py 91.21% <0.00%> (ø)
sherpa/plot/__init__.py 84.13% <0.00%> (ø)
sherpa/sherpa/astro/plot.py 71.61% <0.00%> (ø)
sherpa/sherpa/models/model.py 91.21% <0.00%> (ø)
... and 18 more


in_ehi = numpy.setdiff1d(ehi, elo)
if len(in_elo) > 1:
ehi = numpy.concatenate((ehi, in_elo[-(len(in_elo) - 1):]))
ehi.sort()
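For readers skimming the diff, here is a minimal, self-contained sketch of the merging idea the excerpt above belongs to (a hypothetical standalone helper, not the actual `compile_energy_grid` code): edges that appear in only one of the two arrays are appended to the other, so that both arrays end up describing the same combined grid.

```python
import numpy

def merge_grids(elo, ehi):
    """Sketch of the edge-merging step shown in the diff above.

    elo/ehi are the sorted, unique lower/upper bin edges collected
    from several input grids; they may have different lengths when
    the grids only partially overlap.
    """
    elo = numpy.asarray(elo)
    ehi = numpy.asarray(ehi)
    # Edges that appear in only one of the two arrays.
    in_elo = numpy.setdiff1d(elo, ehi)
    in_ehi = numpy.setdiff1d(ehi, elo)
    # The first lower edge and the last upper edge bound the whole
    # grid, so they are the only "extra" edges that are not copied over.
    if len(in_ehi) > 1:
        elo = numpy.concatenate((elo, in_ehi[:-1]))
        elo.sort()
    if len(in_elo) > 1:
        ehi = numpy.concatenate((ehi, in_elo[1:]))
        ehi.sort()
    return elo, ehi
```

For example, `merge_grids([1, 2, 3], [2, 3, 4, 5])` gives `elo = [1, 2, 3, 4]` and `ehi = [2, 3, 4, 5]`, i.e. the contiguous bins (1-2, 2-3, 3-4, 4-5).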
Contributor
So, this routine is badly named since it doesn't care about energies, but that's not really relevant for this review (I'm not suggesting we change that), except that throughout the code, when dealing with energy grids, I see checks along the lines of

if elo[0] > ehi[-1]:
    ehi, elo = elo, ehi  # or some other "reversal"

(unfortunately I can't remember where, or I'd point to one here, as that would be more helpful). This makes me suspicious about the assumptions of energy-grid handling. I think we are guaranteed that

ehi[i] > elo[i]

but can we guarantee that the arrays are in increasing order - i.e.

elo[i + 1] > elo[i]

Or, my personal belief: someone once saw something odd and started adding these checks, as we don't have good documentation on the preconditions of a lot of our internal routines.

I note that the sort statements added here are not actually fired during testing.
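The two preconditions under discussion can be written down as a small defensive check. This is a hypothetical helper, not existing Sherpa API, just to make the assumptions explicit:

```python
import numpy

def check_grid(elo, ehi):
    """Hypothetical validator for the grid assumptions discussed above."""
    elo = numpy.asarray(elo)
    ehi = numpy.asarray(ehi)
    # Assumed guarantee: every bin has positive width, ehi[i] > elo[i].
    if not numpy.all(ehi > elo):
        raise ValueError("expected ehi[i] > elo[i] for all bins")
    # Questioned guarantee: bins are in increasing order, elo[i + 1] > elo[i].
    if not numpy.all(numpy.diff(elo) > 0):
        raise ValueError("expected elo to be strictly increasing")
```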

Contributor Author
I can add tests for that. It's only needed for partially overlapping grids (I did some interactive playing around with that). I think the example in the doc (but I don't think we run the doctests currently) should hit that line.

Contributor
So, the tests don't seem to catch this case, which only fires when you have a disjoint grid like

xlo = [10, 20, 40]
xhi = [20, 30, 60]

I think (as in it's Friday evening, so I am not at my sharpest right now but still...) the code

    ehi = numpy.concatenate((ehi, in_elo[-(len(in_elo) - 1):]))

can be simplified to

    ehi = numpy.concatenate((ehi, in_elo[1:]))

And ditto for the elo line. Does this make sense to you?
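The equivalence is easy to convince oneself of: for an array with more than one element, `-(len(a) - 1)` is the index of the second element, so both slices drop only the first entry. They differ only for a single-element array, which the surrounding `len(in_elo) > 1` guard already excludes:

```python
import numpy

a = numpy.array([10, 20, 30, 40])
verbose = a[-(len(a) - 1):]  # everything after the first element
simple = a[1:]               # the simplified spelling
assert numpy.array_equal(verbose, simple)

# The one case where the two spellings differ is a single-element
# array, which the len(...) > 1 check in the code rules out:
b = numpy.array([10])
assert len(b[-(len(b) - 1):]) == 1  # b[0:] -> the whole array
assert len(b[1:]) == 0              # empty slice
```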

Contributor Author
I think such a disjoint grid should probably raise an error. But compile_energy_grid takes an arbitrary number of input grids, not just two, so this should come up when I have more than two. I'll add a test case for that.

Contributor
More as a parenthetical than a change needed for this PR, but I wonder if my comment at #1135 (comment) about the elo/ehi checks was leading up to our discussions in #1163 that yes, we do need checks like this....

Contributor Author
Apparently we do need them. But not at every location.

@hamogu hamogu (Contributor Author) commented May 21, 2021

I added the requested test. Can you review again @DougBurke ?

@DougBurke DougBurke (Contributor) left a comment

Thanks for cleaning up the docstring and adding the extra tests. Much appreciated.

@wmclaugh wmclaugh merged commit 572231b into sherpa:main Jun 2, 2021
@hamogu hamogu deleted the docstrings branch June 2, 2021 11:05
DougBurke added a commit to DougBurke/sherpa that referenced this pull request Jun 7, 2021
PR sherpa#1135 removed code but we never caught the documentation
reference to the MultiResponseSumModel class.
DougBurke added a commit to DougBurke/sherpa that referenced this pull request Jun 10, 2021
PR sherpa#1135 removed code but we never caught the documentation
reference to the MultiResponseSumModel class.
@wmclaugh wmclaugh added this to the 4.14.0 milestone Jun 14, 2021