Tags: gorgonia/gorgonia
Fixed a bug in Reshape, where a slice of a slice could not be reshaped (#359). This is fixed in two ways:
1. If the input is a `View`, it is materialized first. This requires more memory, but allocating is worth it compared to the complexity of calculating how much extra overhead to allocate for shared memory.
2. `ShallowClone` in package `tensor` is fixed to ensure that views are correctly shallow-cloned as well.
A bunch of supporting functions for Golgi (#306)

* Added KeepDims as an additional function to "decorate" another function. Cleaned up Ones and ones.
* Added broadcasted operations to api_gen. Wrote a program to generate those broadcasted ops. Renamed BroadcastMul to BroadcastHadamardProd; BroadcastMul is coming soon.
* Added an example to show how one may use the broadcasting operations to create dense triangular matrices.
* Added better support for BatchedMatMul. Tensors with more than 3 dimensions are now supported!
* Added a unaryOp interface to genapi. Generating the interfaces makes them more consistent. Previously inversef32 gave the wrong ʘUnaryOperatorType.
* Allow the axis to be defined in SoftMax. Furthermore, the default axis is now the last axis. This allows SoftMax to be done across ndarrays. Added more examples.
* Ported Unconcat to Gorgonia. Added tests.
* Added some things for future use.
* Added more support functions for Golgi.
* Added some statistics generation for genapi.
* Added monad-y error handling to Gorgonia.
* Did away with the DoXXX functions.
* Changed the definition of LiftResult a bit.
* Added some helper functions.
* Updated Unconcat to use Nodes instead of []*Node. This allows for easier lifting of the return value, though its utility is not known at the moment.
* Added HeEtAl InitWFn.
* Ugh. Copy-and-pasting sucks when you can only type with one hand.
* Squashed commit of the following:

  commit 592126c
  Author: Ben Leitner <7515022+bdleitner@users.noreply.github.com>
  Date: Sun Nov 17 15:09:08 2019 -0800

  Refactor the max/sum ops to share common code. Have the type/inferShape/Do methods behave in a consistent manner (#346):
  * Dimensions specified in the "along" parameter are reduced to size 1, but not removed. (Note: this caused TestRepeatOpDoDiff to fail, but this version fixes it. Perhaps we should make preserving the size-1 dimensions an option of the reduction op?)
  * If all dimensions are included, the result will be a scalar.
  * If all dimensions but 1 are included, the result is a vector, regardless of which dimension is left intact.

  Tests verify that the resulting nodes have the expected shape.

  Note: While here, fix a warning in Max's SymDiff where retVal[0] is set when retVal has not been initialized. I wonder if this is related to #323, where SymDiff for StableSoftMax (which uses Max) was failing with a panic (probably not, as the error message there seems unrelated, but probably a good fix anyway). Closes #326.

  commit 6fd05db
  Author: Olivier Wulveryck <owulveryck@users.noreply.github.com>
  Date: Tue Nov 12 09:15:56 2019 +0100

  Examples/readme (#351)
  * chore(readme): add references to the gorgonia website

  commit e6bc7dd
  Merge: 9ecd7d0 d1d231f
  Author: gareth <31232838+jokebroker@users.noreply.github.com>
  Date: Sat Nov 9 06:47:29 2019 +1100

  Merge pull request #350 from mattn/fix-gomod: Fix go.mod

  commit d1d231f
  Author: Yasuhiro Matsumoto <mattn.jp@gmail.com>
  Date: Fri Nov 8 21:35:58 2019 +0900

  Fix go.mod

  commit 9ecd7d0
  Author: Olivier Wulveryck <owulveryck@users.noreply.github.com>
  Date: Thu Nov 7 09:59:37 2019 +0100

  Gap operator (#302)
  * feat(wip): scratch space for a Global Average Pooling operator
  * chore: skeleton of the operator
  * feat: Global Average Pool

  commit 6cc7466
  Author: mattn <mattn.jp@gmail.com>
  Date: Sat Nov 2 03:16:02 2019 +0900

  Improvement of example/iris (#348)

  commit 6f8c10a
  Author: Olivier Wulveryck <owulveryck@users.noreply.github.com>
  Date: Thu Oct 31 22:10:37 2019 +0100

  Iris example (#347)
  * fix: do not overwrite the channel if it already exists
  * feat: multivariate linear regression

  commit b7b4b2c
  Author: Olivier Wulveryck <owulveryck@users.noreply.github.com>
  Date: Wed Oct 16 15:34:26 2019 +0200

  Create FUNDING.yml (#342)

* Fixed Softmax
Init of the encoding/dot package (#309)

* feat: dot representation of a graph as an external package
* feat(encoding): a grouper is an object that can hold several groups
* feat: first round of an encoding package
* chore: the encoding package is now internal
* fix: withGroup is unexported (for now). The function uses an internal package, so it is useless to export it for now. This may change when the encoding package is stable enough.
* fix(groups): a node is assumed to be in the ExprGraph group
* fix(groups): node is a Grouper; subgraphs are generated
* chore: all the nodes generated in Conv2d are part of a group
* feat(subgraphs): first attempt to group nodes of the same operator
* fix: escape the HTML name and substitute the curly braces
* feat: generate the subgraph in a coral color
* wip: a bunch of improvements
* chore: add the grouping operations for maxpool and rectify
* chore: prepare the inception of graphs by adding an "IsPrimary" flag
V0.9.0 working 2 (#227)

* Added Issue 217 to known issues. Renamed the ConstDeriv test to 182.
* Added Reset() to BatchNormOp to conform to the same interface as the native version would.
* Cleaned up, and plugged some leaks.
* Fixed a slight issue with the compilation: DoWork() is now emitted before a linalg op.
* [fix] The test for the Reshape operator was incorrect.
* Temporarily commented out the tests for #217.
* Added an error message for VM Genera
* Added go1.11 to Travis. Fixed a CUDA Conv backprop shape error.
* [fix] Using NewOrderedNodes as an implementation of gonum.Nodes.
* Fixed the tests to use the new gonum interface.
* Fixed an error in the error.warpf call.
* [fix] Back to the idea that graph options should not be exported.
* [fix] More idiomatic way to handle error msgs.
* Merged from v0.9.2-working2.
* New stuff? I don't know when I did this.
* Fixes #257, I think.
* Renamed examples/stacked autoencoder to examples/stacked_autoencoder.
* Added tests. Added better error support for LISPMachine. Removed the sanity check for now because there are some weird things w.r.t. dimensions in dual values.
* Updated with 237fix. There seems to be a problem with the concurrent training example, so it has been commented out for now. The `sanity()` method of the dual values has been temporarily suspended. It turns out lispMachines don't play well when there are differently shaped (but legal) derivations.
V0.9.0 (#195)

Ongoing notes:

* **CUDA**: Better CUDA support (IN PROGRESS)
  * ~ColMajor used by default if engine is CUDA.~ (ColMajor is supported, but RowMajor is the default for all the major cuBLAS versions. Careful reasoning about the parameters obviates the need for ColMajor by default, which causes more headaches. It is still supported.)
  * Transposition will be done automatically when performing transfers back to the CPU.
  * cudnn operations supported (IN PROGRESS) (note: these are the ones I use more often, hence they get more attention):
    * [x] Conv2d
    * [x] Dropout
    * [x] Maxpool2d
    * [x] BatchNorm
    * [x] Rectify
  * Other CUDA-related optimizations:
    * [x] full cuBLAS support
* **New Ops**:
  * BatchNorm
  * InvSqrt
  * CUDA-enabled ops in `ops/nn` (a preview of how things will start to look in v0.10.0)
* **New Features**:
  * Limited shape inference. Working towards a calculus for shapes (first raised in #96 and #97).
* **Optimizations**:
  * Basic ops are optimized to use engine functions if available; otherwise they fall back to `Apply`, which adds a penalty from repeatedly calling functions.
  * Faster VMs (1 of 2 VMs): ~greedy goroutines grab gigs from a priority queue. This causes faster execution of code in general.~ (this is moved to a future version of 0.9.xx):

    ```
    benchmark                         old ns/op      new ns/op      delta
    BenchmarkTapeMachineExecution-8   3129074510     2695304022     -13.86%

    benchmark                         old allocs     new allocs     delta
    BenchmarkTapeMachineExecution-8   25745          25122          -2.42%

    benchmark                         old bytes      new bytes      delta
    BenchmarkTapeMachineExecution-8   4804578705     4803784111     -0.02%
    ```
* **Code generation**: some exported API is now auto-generated.
* **New Solver**: @ynqa added the Momentum solver.
* **Breaking API**: `Solver` now takes a slice of `ValueGrad` instead of `Nodes`. `ValueGrad` is an interface, which a `*Node` fulfils. An additional utility function, `NodesToValueGrads`, has been added to aid with refactoring.
This was done for two reasons:

* ~The support for the BatchNorm operation, which is a verily impure and highly stateful function. The BatchNorm op has internal states that need to have their gradients updated as well, but the internal state of BatchNorm isn't really part of the expression graph, and really it shouldn't be.~ Turns out there was a better API for `BatchNorm`.
* In the next version, v0.10.0, we aim to do [better package organization](#91) for manageability. With this API-breaking change, the solver is now less dependent on the other parts of Gorgonia and can be easily separated.

**Breaking Semantics**: A `gorgonia.VM` now implements `io.Closer`. It should be treated as a resource as well as a computation device: the VM must be `Close()`d in order for the resources acquired by the VM to actually be released. Turns out, automatic resource management is too difficult. Who'd thunk that?
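The new `io.Closer` semantics can be sketched with a toy type. `toyVM` below is a hypothetical stand-in that only illustrates the acquire/release contract, not the real `gorgonia.VM`:

```go
package main

import (
	"fmt"
	"io"
)

// toyVM stands in for a VM that acquires resources on creation and must
// be Close()d to release them; it is no longer cleaned up automatically.
type toyVM struct {
	buf []byte // stands in for pinned host or device memory
}

// Close releases the acquired resources, satisfying io.Closer.
func (m *toyVM) Close() error {
	m.buf = nil
	return nil
}

// compile-time check that toyVM is an io.Closer, as gorgonia.VM now is
var _ io.Closer = (*toyVM)(nil)

func main() {
	vm := &toyVM{buf: make([]byte, 1024)}
	defer vm.Close() // treat the VM as a resource, like a file handle

	fmt.Println(len(vm.buf)) // 1024
}
```

The idiomatic pattern is the same as for files: create the VM, `defer vm.Close()`, then run computations, so resources are released even on early return.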