Back out "Fixing a bug where allocating a 4GB block results in using 8GB of memory (#95827)" #96796
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/96796. Note: links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit fcfe9dc. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D44080273
Force-pushed from 1b84c0a to 109b4b2.
Force-pushed from 109b4b2 to 9b60405.
Force-pushed from 9b60405 to def63c7.
Force-pushed from def63c7 to 59045a2.
Force-pushed from 59045a2 to b0fbd4b.
cc @akamali, it looks like the 32 MB caching threshold is too aggressive. Can you make the thresholds configurable via environment variables, similar to how it is done for the CUDA caching allocator?
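For reference, a minimal sketch of the kind of environment-variable override being requested, assuming a hypothetical variable name `PYTORCH_PINNED_ALLOC_MAX_CACHED_BYTES` and a 32 MB fallback (the CUDA caching allocator exposes its own knobs through `PYTORCH_CUDA_ALLOC_CONF`); this is not code from the original diff:

```cpp
#include <cstdint>
#include <cstdlib>
#include <string>

// Illustrative only: read a byte-size threshold from an environment
// variable, falling back to a default when the variable is unset or
// unparsable. The variable name and the 32 MB default are hypothetical.
int64_t pinnedCachingThresholdBytes() {
  constexpr int64_t kDefaultBytes = 32LL * 1024 * 1024;  // 32 MB
  const char* env = std::getenv("PYTORCH_PINNED_ALLOC_MAX_CACHED_BYTES");
  if (env == nullptr) {
    return kDefaultBytes;
  }
  try {
    return static_cast<int64_t>(std::stoll(env));
  } catch (const std::exception&) {
    return kDefaultBytes;  // ignore malformed values
  }
}
```

Falling back to the default when the variable is unset would keep behavior unchanged for users who do not opt in.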
Back out "Fixing a bug where allocating a 4GB block results in using 8GB of memory (pytorch#95827)" (pytorch#96796)

Summary:
Pull Request resolved: pytorch#96796
Original commit changeset: a19273017a2a
Original Phabricator Diff: D43969564

The parameters were set too aggressively and hurt some critical use cases. We should revisit this idea with more rigorous testing, and at least make the parameters configurable (via GFLAGS?) with conservative default values (e.g., INF for the pinned-memory caching size limit).

Test Plan: This is an unland diff. unlandayc

Reviewed By: terrycsy, banitag1, yuchenhao, liangluofb

Differential Revision: D44080273

fbshipit-source-id: ce587db7db3db4bd0ebcabdcba4c3b39fd2424c0
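To make the "configurable parameters with conservative defaults" idea concrete, here is a minimal sketch using GFLAGS; the flag name, its effectively unlimited (INF) default, and the helper function are illustrative assumptions rather than code from the original diff:

```cpp
#include <cstdint>
#include <limits>

#include <gflags/gflags.h>

// Hypothetical flag: cap on how many bytes of pinned host memory the
// allocator may keep cached. The default is effectively "no limit", so
// behavior only changes when a deployment opts in explicitly.
DEFINE_int64(
    pinned_memory_cache_limit_bytes,
    std::numeric_limits<int64_t>::max(),
    "Maximum number of bytes of pinned host memory to keep cached.");

// Example policy check an allocator could make before caching a freed block.
bool shouldCacheFreedBlock(int64_t currently_cached_bytes, int64_t block_bytes) {
  return currently_cached_bytes + block_bytes <=
      FLAGS_pinned_memory_cache_limit_bytes;
}
```

Defaulting to INT64_MAX preserves the uncapped caching behavior restored by this backout unless a user explicitly sets a limit.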
Force-pushed from b0fbd4b to 9e4ca4a.
jianyuh left a comment:
I will stamp to unblock and then let @akamali / @jaewonlee-fb figure out a more proper approach to land the original PR.
@pytorchbot merge
Merge failed. Reason: This PR needs a label. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; the first few failures are: trunk / win-vs2019-cuda11.7-py3 / test (default, 3, 5, windows.g5.4xlarge.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
Force-pushed from 9e4ca4a to bf2de70.
Back out "Fixing a bug where allocating a 4GB block results in using 8GB of memory (pytorch#95827)" (pytorch#96796)

Summary:
Pull Request resolved: pytorch#96796
Original commit changeset: a19273017a2a
Original Phabricator Diff: D43969564

The parameters were set too aggressively and hurt some critical use cases. We should revisit this idea with more rigorous testing, and at least make the parameters configurable (via GFLAGS?) with conservative default values (e.g., INF for the pinned-memory caching size limit).

Test Plan: This is an unland diff. unlandayc

Reviewed By: terrycsy, jianyuh, banitag1, yuchenhao, liangluofb

Differential Revision: D44080273

fbshipit-source-id: 7743a8d53e5fdd11aa72843802e2abc31585211a
Force-pushed from bf2de70 to fcfe9dc.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Back out "Fixing a bug where allocating a 4GB block results in using 8GB of memory (#95827)" (#96796)

Summary:
Original commit changeset: a19273017a2a
Original Phabricator Diff: D43969564

Test Plan: unlandayc

Reviewed By: terrycsy

Differential Revision: D44080273

Pull Request resolved: pytorch/pytorch#96796
Approved by: https://github.com/jianyuh, https://github.com/davidberard98