Add UTs for accelerator device-agnostic runtime APIs #133572
base: gh/guangyey/62/base
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/133572
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit 3edb0d1 with merge base 8b08559: FLAKY - the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 3b5de65108564ad6ba0cf32c8e60ffee13e052f8 Pull Request resolved: #133572
ghstack-source-id: 337ac201e83fda8799bbb69b5d77bd6bfb5fb9ed Pull Request resolved: #133572
ghstack-source-id: 6438fec6be2f0d751fbd37fd51bddcfeb6914fea Pull Request resolved: #133572
        self.assertEqual(torch.current_accelerator(), "xpu")

    @unittest.skipIf(not TEST_ACCELERATOR, "no available accelerators detected")
    def test_generic_multi_device_behavior(self):
Please add device to the input parameters, so that you can check whether the current device type is the same as the input device type.
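A minimal sketch of the suggested shape, assuming the device argument comes from the common device-type test instantiation helpers and that torch.accelerator.current_accelerator() is available (the class name and only_for list below are illustrative, not the PR's actual code):

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import instantiate_device_type_tests


class TestAcceleratorSketch(TestCase):
    def test_generic_multi_device_behavior(self, device):
        # `device` is filled in by the device-type instantiation (e.g. "cuda:0"
        # or "xpu:0"), so the test can compare the active accelerator against
        # the device type it was instantiated for.
        acc = torch.accelerator.current_accelerator()
        if acc is not None:
            self.assertEqual(acc.type, torch.device(device).type)


# Generate one concrete test class per available device type.
instantiate_device_type_tests(TestAcceleratorSketch, globals(), only_for=["cuda", "xpu"])

if __name__ == "__main__":
    run_tests()
```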
Refine the UTs to be more general.
ghstack-source-id: a1b18c010dedad2fa34d699c0b84a175a0de1df2 Pull Request resolved: #133572
ghstack-source-id: e41814af0228407ea8baba358a5042077900be3a Pull Request resolved: #133572
ghstack-source-id: 2b5145530fdfa5d44ae9a389df17c37a3f9c4cbb Pull Request resolved: #133572
ghstack-source-id: 70812fef1aef8ca55489f7232debeb5e0044bbe6 Pull Request resolved: #133572
ghstack-source-id: 2f42f774db9b05b6d60fcb47ef5aa531e24a08f0 Pull Request resolved: #133572
ghstack-source-id: a4c9660c2fc09af9e94047994d33b52ee0277730 Pull Request resolved: #133572
Unrelated failures; please refer to #138548.
Hi @malfet, could you help review this separate PR? It aims to add some UTs to test the APIs introduced by the previous PR.
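For context, a rough sketch of what a device-agnostic runtime-API test could look like, assuming the torch.accelerator namespace exposes is_available(), device_count(), and synchronize() (illustrative only, not the PR's actual test file):

```python
import unittest

import torch
from torch.testing._internal.common_utils import TestCase, run_tests

TEST_ACCELERATOR = torch.accelerator.is_available()


class TestAcceleratorRuntime(TestCase):
    @unittest.skipIf(not TEST_ACCELERATOR, "no available accelerators detected")
    def test_device_count_and_sync(self):
        # Device-agnostic: the same assertions hold whether the backend
        # is CUDA, XPU, or another accelerator.
        self.assertGreaterEqual(torch.accelerator.device_count(), 1)
        # Synchronize the current accelerator without naming the backend.
        torch.accelerator.synchronize()


if __name__ == "__main__":
    run_tests()
```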
test/test_accelerator.py (Outdated)
class TestAccelerator(TestCase):
    def test_current_accelerator(self):
        self.assertTrue(torch.accelerator.is_available())
        accelerators = ["cuda", "xpu"]
Why are hip and mps not part of the list?
hip will be covered since it is masqueraded as cuda, and mps has been added to the list.
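Roughly, the resulting check would look like the sketch below (illustrative; assumes torch.accelerator.current_accelerator() is available, and hip is deliberately absent because ROCm builds report the device type as cuda):

```python
import torch

# "hip" is covered implicitly: on ROCm builds torch.cuda is the HIP backend,
# so the reported accelerator type is "cuda".
accelerators = ["cuda", "xpu", "mps"]
acc = torch.accelerator.current_accelerator()
if acc is not None:
    assert acc.type in accelerators
```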
    @unittest.skipIf((not TEST_CUDA) and (not TEST_XPU), "requires CUDA or XPU")
    def test_specific_stream_compatibility(self):
        s1 = torch.cuda.Stream() if torch.cuda.is_available() else torch.xpu.Stream()
Same as above: why are hip and mps not considered?
hip will be covered since it is masqueraded as cuda. mps is not considered here because it doesn't have a device-specific stream (torch.mps.Stream).
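A sketch of the compatibility check being discussed, assuming the generic torch.accelerator.set_stream() / current_stream() APIs interoperate with the backend-specific stream objects (mps is skipped because it has no torch.mps.Stream):

```python
import torch

# Pick a backend-specific stream; only CUDA and XPU expose one here,
# since MPS has no torch.mps.Stream.
if torch.cuda.is_available():
    s1 = torch.cuda.Stream()
elif torch.xpu.is_available():
    s1 = torch.xpu.Stream()
else:
    raise RuntimeError("requires CUDA or XPU")

# The device-agnostic accelerator APIs should accept the backend-specific
# stream and report it back as the current stream.
torch.accelerator.set_stream(s1)
assert torch.accelerator.current_stream().stream_id == s1.stream_id
```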
@malfet May I know if I have addressed your comments?
Stack from ghstack (oldest at bottom):
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10