Shiwei Liu 0003
Person information
- affiliation: University of Oxford, Mathematical Institute, UK
- affiliation: University of Texas at Austin, TX, USA
- affiliation (PhD): Eindhoven University of Technology, Eindhoven, The Netherlands
Other persons with the same name
- Shiwei Liu (aka: Shi-Wei Liu) — disambiguation page
- Shiwei Liu 0001 — Huazhong Agricultural University, College of Engineering, Wuhan, China
- Shiwei Liu 0002 — Fudan University, State Key Laboratory of Integrated Chips and Systems, Shanghai, China
2020 – today
- 2024
- [c36] Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs. ICLR 2024
- [c35] Gen Li, Lu Yin, Jie Ji, Wei Niu, Minghai Qin, Bin Ren, Linke Guo, Shiwei Liu, Xiaolong Ma: NeurRev: Train Better Sparse Neural Network Practically via Neuron Revitalization. ICLR 2024
- [c34] Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, Dacheng Tao: AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024
- [c33] Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji: CaM: Cache Merging for Memory-efficient LLMs Inference. ICML 2024
- [c32] Lu Yin, Ajay Kumar Jaiswal, Shiwei Liu, Souvik Kundu, Zhangyang Wang: Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs. ICML 2024
- [c31] Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu: Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. ICML 2024
- [c30] Jie Ji, Gen Li, Lu Yin, Minghai Qin, Geng Yuan, Linke Guo, Shiwei Liu, Xiaolong Ma: Advancing Dynamic Sparse Training by Exploring Optimization Opportunities. ICML 2024
- [c29] Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, Zhangyang Wang: Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once. ICML 2024
- [c28] Zhenyu Zhang, Shiwei Liu, Runjin Chen, Bhavya Kailkhura, Beidi Chen, Atlas Wang: Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache. MLSys 2024
- [i45] Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang: Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. CoRR abs/2403.04797 (2024)
- [i44] Ajay Jaiswal, Bodun Hu, Lu Yin, Yeonju Ro, Shiwei Liu, Tianlong Chen, Aditya Akella: FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping. CoRR abs/2404.03865 (2024)
- [i43] Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei Liu: OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning. CoRR abs/2405.18380 (2024)
- [i42] Anke Tang, Li Shen, Yong Luo, Shiwei Liu, Han Hu, Bo Du: Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion. CoRR abs/2406.09770 (2024)
- [i41] Adriana Fernandez-Lopez, Honglie Chen, Pingchuan Ma, Lu Yin, Qiao Xiao, Stavros Petridis, Shiwei Liu, Maja Pantic: MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization. CoRR abs/2406.17614 (2024)
- [i40] Qiao Xiao, Pingchuan Ma, Adriana Fernandez-Lopez, Boqian Wu, Lu Yin, Stavros Petridis, Mykola Pechenizkiy, Maja Pantic, Decebal Constantin Mocanu, Shiwei Liu: Dynamic Data Pruning for Automatic Speech Recognition. CoRR abs/2406.18373 (2024)
- [i39] Arinbjorn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen: Composable Interventions for Language Models. CoRR abs/2407.06483 (2024)
- [i38] Zhenyu Zhang, Ajay Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang: Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. CoRR abs/2407.08296 (2024)
- [i37] Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang: From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. CoRR abs/2407.11239 (2024)
- [i36] Tianjin Huang, Meng Fang, Li Shen, Fan Liu, Yulong Pei, Mykola Pechenizkiy, Shiwei Liu, Tianlong Chen: (PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork. CoRR abs/2407.17412 (2024)
- 2023
- [j5] Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen: Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. Int. J. Comput. Vis. 131(10): 2635-2648 (2023)
- [j4] Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu: Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks. Trans. Mach. Learn. Res. 2023 (2023)
- [c27] Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy: Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost. AAAI 2023: 10945-10953
- [c26] Ruisi Cai, Xiaohan Chen, Shiwei Liu, Jayanth Srinivasa, Myungjin Lee, Ramana Kompella, Zhangyang Wang: Many-Task Federated Learning: A New Problem Setting and A Simple Baseline. CVPR Workshops 2023: 5037-5045
- [c25] Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guibing Guo, Xingwei Wang: Data Augmented Flatness-aware Gradient Projection for Continual Learning. ICCV 2023: 5607-5616
- [c24] Tianlong Chen, Zhenyu Zhang, Ajay Kumar Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. ICLR 2023
- [c23] Duc N. M. Hoang, Shiwei Liu, Radu Marculescu, Zhangyang Wang: Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph. ICLR 2023
- [c22] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang: More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. ICLR 2023
- [c21] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! ICLR 2023
- [c20] Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu: Are Large Kernels Better Teachers than Transformers for ConvNets? ICML 2023: 14023-14038
- [c19] Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication. ICML 2023: 14679-14690
- [c18] Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. ICML 2023: 14691-14701
- [c17] Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu: Dynamic Sparsity Is Channel-Level Sparsity Learner. NeurIPS 2023
- [c16] Duc Hoang, Souvik Kundu, Shiwei Liu, Zhangyang Wang: Don't just prune by magnitude! Your mask topology is a secret weapon. NeurIPS 2023
- [c15] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang: The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter. NeurIPS 2023
- [c14] Hoang Pham, The-Anh Ta, Shiwei Liu, Lichuan Xiang, Dung Le, Hongkai Wen, Long Tran-Thanh: Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask? NeurIPS 2023
- [c13] Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy: Enhancing Adversarial Training via Reweighting Optimization Trajectory. ECML/PKDD (1) 2023: 113-130
- [c12] Jiaxu Zhao, Lu Yin, Shiwei Liu, Meng Fang, Mykola Pechenizkiy: REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training. ECML/PKDD (2) 2023: 313-329
- [i35] Shiwei Liu, Zhangyang Wang: Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers. CoRR abs/2302.02596 (2023)
- [i34] Tianlong Chen, Zhenyu Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. CoRR abs/2303.01610 (2023)
- [i33] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! CoRR abs/2303.02141 (2023)
- [i32] Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu: Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks. CoRR abs/2303.07200 (2023)
- [i31] Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu: Are Large Kernels Better Teachers than Transformers for ConvNets? CoRR abs/2305.19412 (2023)
- [i30] Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu: Dynamic Sparsity Is Channel-Level Sparsity Learner. CoRR abs/2305.19454 (2023)
- [i29] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang: The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter. CoRR abs/2306.03805 (2023)
- [i28] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. CoRR abs/2306.10460 (2023)
- [i27] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication. CoRR abs/2306.10466 (2023)
- [i26] Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy: Enhancing Adversarial Training via Reweighting Optimization Trajectory. CoRR abs/2306.14275 (2023)
- [i25] Lu Yin, Shiwei Liu, Ajay Jaiswal, Souvik Kundu, Zhangyang Wang: Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity. CoRR abs/2310.02277 (2023)
- [i24] Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, Dacheng Tao: AdaMerging: Adaptive Model Merging for Multi-Task Learning. CoRR abs/2310.02575 (2023)
- [i23] Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, Shiwei Liu: Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. CoRR abs/2310.05175 (2023)
- [i22] Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs. CoRR abs/2310.08915 (2023)
- [i21] Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen: Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective. CoRR abs/2312.01397 (2023)
- [i20] Jiaxu Zhao, Lu Yin, Shiwei Liu, Meng Fang, Mykola Pechenizkiy: REST: Enhancing Group Robustness in DNNs through Reweighted Sparse Training. CoRR abs/2312.03044 (2023)
- [i19] Boqian Wu, Qiao Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, Maurice van Keulen, Elena Mocanu: E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation. CoRR abs/2312.04727 (2023)
- [i18] Tianjin Huang, Tianlong Chen, Zhangyang Wang, Shiwei Liu: The Counterattack of CNNs in Self-Supervised Learning: Larger Kernel Size might be All You Need. CoRR abs/2312.05695 (2023)
- 2022
- [j3] Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy: A brain-inspired algorithm for training highly sparse neural networks. Mach. Learn. 111(12): 4411-4452 (2022)
- [c11] Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu: Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity. ICLR 2022
- [c10] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. ICLR 2022
- [c9] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu: You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. LoG 2022: 8
- [c8] Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu: Dynamic Sparse Network for Time Series Classification: Learning What to "See". NeurIPS 2022
- [i17] Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, Dacheng Tao: Achieving Personalized Federated Learning with Sparse Local Models. CoRR abs/2201.11380 (2022)
- [i16] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. CoRR abs/2202.02643 (2022)
- [i15] Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen: Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. CoRR abs/2203.02770 (2022)
- [i14] Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu: Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training. CoRR abs/2205.15322 (2022)
- [i13] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang: More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. CoRR abs/2207.03620 (2022)
- [i12] Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy: Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost. CoRR abs/2208.10842 (2022)
- [i11] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu: You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. CoRR abs/2211.15335 (2022)
- [i10] Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu: Dynamic Sparse Network for Time Series Classification: Learning What to "see". CoRR abs/2212.09840 (2022)
- 2021
- [j2] Shiwei Liu, Decebal Constantin Mocanu, Amarsagar Reddy Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy: Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware. Neural Comput. Appl. 33(7): 2589-2604 (2021)
- [j1] Shiwei Liu, Iftitahu Ni'mah, Vlado Menkovski, Decebal Constantin Mocanu, Mykola Pechenizkiy: Efficient and effective training of sparse recurrent neural networks. Neural Comput. Appl. 33(15): 9625-9636 (2021)
- [c7] Lu Yin, Vlado Menkovski, Shiwei Liu, Mykola Pechenizkiy: Hierarchical Semantic Segmentation using Psychometric Learning. ACML 2021: 798-813
- [c6] Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy: Selfish Sparse RNN Training. ICML 2021: 6893-6904
- [c5] Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy: Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. ICML 2021: 6989-7000
- [c4] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu: Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. NeurIPS 2021: 9908-9922
- [i9] Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy: Selfish Sparse RNN Training. CoRR abs/2101.09048 (2021)
- [i8] Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy: Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. CoRR abs/2102.02887 (2021)
- [i7] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu: Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. CoRR abs/2106.10404 (2021)
- [i6] Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu: FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity. CoRR abs/2106.14568 (2021)
- [i5] Lu Yin, Vlado Menkovski, Shiwei Liu, Mykola Pechenizkiy: Hierarchical Semantic Segmentation using Psychometric Learning. CoRR abs/2107.03212 (2021)
- 2020
- [c3] Fulong Yan, Shiwei Liu, Nicola Calabretta: Network Performance Optimization with Real Time Traffic Prediction in Data Center Network. ECOC 2020: 1-4
- [c2] Shiwei Liu: Learning Sparse Neural Networks for Better Generalization. IJCAI 2020: 5190-5191
- [c1] Shiwei Liu, Tim van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu: Topological Insights into Sparse Neural Networks. ECML/PKDD (3) 2020: 279-294
- [i4] Shiwei Liu, Tim van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu: Topological Insights in Sparse Neural Networks. CoRR abs/2006.14085 (2020)
2010 – 2019
- 2019
- [i3] Shiwei Liu, Decebal Constantin Mocanu, Amarsagar Reddy Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy: Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware. CoRR abs/1901.09181 (2019)
- [i2] Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy: Intrinsically Sparse Long Short-Term Memory Networks. CoRR abs/1901.09208 (2019)
- [i1] Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy: On improving deep learning generalization with adaptive sparse connectivity. CoRR abs/1906.11626 (2019)