Conversation

@anderspapitto
Contributor

This is a first pass to get feedback

@anderspapitto
Contributor Author

Now removes no-op transposes, consecutive transposes that cancel out, and transposes into Gemm (where the transpose can just become a parameter to Gemm).
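(A minimal numpy sketch of the permutation algebra behind the first two rewrites; the pass itself operates on JIT graph nodes in C++, and `is_noop`/`compose` below are illustrative names, not functions from this PR.)

```python
import numpy as np

def is_noop(perm):
    # A Transpose whose permutation is the identity can be deleted outright.
    return list(perm) == list(range(len(perm)))

def compose(p1, p2):
    # transpose(transpose(x, p1), p2) == transpose(x, q) with q[i] = p1[p2[i]],
    # so two back-to-back Transpose nodes fuse into one; if q is the identity,
    # the fused node is itself a no-op and both originals disappear.
    return [p1[axis] for axis in p2]

x = np.random.rand(2, 3, 4)
p1, p2 = (2, 0, 1), (1, 2, 0)
q = compose(p1, p2)
assert np.array_equal(x.transpose(p1).transpose(p2), x.transpose(q))
assert is_noop(q)  # (1, 2, 0) undoes (2, 0, 1), so both nodes cancel
```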

@ezyang
Contributor

ezyang commented Nov 7, 2017

Thanks, this looks great! I wrote one minor comment about generalizing transpose/permutation fusion, but everything looks fine.

Before we merge, we're going to need a test. Do you know how to operate the accept test machinery?

@ezyang
Contributor

ezyang commented Nov 7, 2017

@pytorchbot test this please

@anderspapitto
Contributor Author

Generalized the logic, cleaned up the code and comments, and fixed several bugs related to mutating shared references and to resource deallocation.

@anderspapitto changed the title from "Optimizer: remove consecutive, canceling transposes" to "Optimizer: optimize transposes in variety of circumstances" on Nov 7, 2017
@anderspapitto force-pushed the fuse branch 2 times, most recently from f591005 to b4a776b on November 8, 2017 20:22
@houseroad
Member

Thanks, this is great!

BTW, what's happening with our CI?

@dzhulgakov
Collaborator

Looks great! As for the tests: it seems that the ordering of ops in the backward pass is not deterministic (https://travis-ci.org/pytorch/pytorch/jobs/299841429). It's not related to this diff, but @zdevito and @ezyang might be interested in figuring out a better way to unit-test graphs.
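One hedged workaround (not something proposed in this thread): assert on an order-insensitive summary of the graph, such as a histogram of op kinds, rather than on an exact textual dump. The helper below is illustrative, not a PyTorch API:

```python
from collections import Counter

def same_op_histogram(ops_a, ops_b):
    # Much weaker than full graph equality, but immune to nondeterministic
    # scheduling of independent nodes in the backward pass.
    return Counter(ops_a) == Counter(ops_b)

assert same_op_histogram(["Transpose", "Gemm", "Add"], ["Add", "Gemm", "Transpose"])
```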

@anderspapitto force-pushed the fuse branch 5 times, most recently from 6435199 to b60bb9e on November 17, 2017 19:54
@dzhulgakov
Collaborator

Are we going to land it (I guess the only missing part is tests), or do you plan to put it in a separate transforms library?

@anderspapitto
Contributor Author

I would say there's no reason not to land it; also, some tests are here: ezyang/onnx-pytorch#40

@ezyang added the oncall: jit label on Nov 27, 2017
Anders Papitto added 2 commits on November 28, 2017 13:34
- No-op transposes
- Consecutive transposes (fuse them)
- Transposes into Gemm (fuse them into transA/transB parameter; see the sketch below)
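As a rough illustration of the third bullet, here is a numpy model of ONNX Gemm semantics (Y = alpha * A' * B' + beta * C); the `gemm` helper below is a stand-in for the real op, not code from this PR. A 2-D Transpose feeding the A (or B) input folds into the transA (or transB) attribute, and the Transpose node disappears:

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0, transA=0, transB=0):
    # Reference semantics of ONNX Gemm: Y = alpha * A' @ B' + beta * C,
    # where A' = A.T when transA is set, and likewise for B.
    A = A.T if transA else A
    B = B.T if transB else B
    return alpha * (A @ B) + beta * C

A, B, C = np.random.rand(3, 2), np.random.rand(3, 4), np.random.rand(2, 4)
# Before: an explicit Transpose node feeds Gemm's first input.
before = gemm(A.T, B, C, transA=0)
# After: the Transpose is gone; its effect lives in the transA attribute.
after = gemm(A, B, C, transA=1)
assert np.allclose(before, after)
```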
@ezyang
Contributor

ezyang commented Nov 28, 2017

@pytorchbot test this please

@ezyang merged commit 67c3cbd into pytorch:master on Nov 28, 2017
dzhulgakov pushed a commit to dzhulgakov/pytorch that referenced this pull request Dec 2, 2017
* Optimizer: optimize transposes in variety of circumstances (#3509)

* Optimizer: Optimize transposes in variety of circumstances

- No-op transposes
- Consecutive transposes (fuse them)
- Transposes into Gemm (fuse them into transA/transB parameter)

* touch up out of date comment
soumith pushed a commit that referenced this pull request Dec 4, 2017
* Optimizer: optimize transposes in variety of circumstances (#3509)

* Optimizer: Optimize transposes in variety of circumstances

- No-op transposes
- Consecutive transposes (fuse them)
- Transposes into Gemm (fuse them into transA/transB parameter)

* touch up out of date comment

* Backporting optimizer changes
peterjc123 pushed a commit to peterjc123/pytorch that referenced this pull request Dec 4, 2017
@anderspapitto deleted the fuse branch on January 18, 2018 21:33
wuhuikx pushed a commit to wuhuikx/pytorch that referenced this pull request Jan 30, 2020