{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T15:44:51Z","timestamp":1772811891880,"version":"3.50.1"},"reference-count":63,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T00:00:00Z","timestamp":1752192000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T00:00:00Z","timestamp":1752192000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001691","name":"MEXT | Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["KAKENHI grants JP20H05705 and JP20H05954 to Y.K."],"award-info":[{"award-number":["KAKENHI grants JP20H05705 and JP20H05954 to Y.K."]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100009619","name":"Japan Agency for Medical Research and Development","doi-asserted-by":"publisher","award":["grant JP24wm0625409 to Y.K."],"award-info":[{"award-number":["grant JP24wm0625409 to Y.K."]}],"id":[{"id":"10.13039\/100009619","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002241","name":"MEXT | Japan Science and Technology Agency","doi-asserted-by":"publisher","award":["CREST grants JPMJCR18A5 and JPMJCR22P3 to Y.K."],"award-info":[{"award-number":["CREST grants JPMJCR18A5 and JPMJCR22P3 to Y.K."]}],"id":[{"id":"10.13039\/501100002241","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001863","name":"New Energy and Industrial Technology Development Organization","doi-asserted-by":"publisher","award":["grant JPNP20006 to Y.K."],"award-info":[{"award-number":["grant JPNP20006 to Y.K."]}],"id":[{"id":"10.13039\/501100001863","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Guardian Robot Project, RIKEN"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Nat Comput Sci"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Inter-individual variability in fine-grained functional topographies poses challenges for scalable data analysis and modeling. Functional alignment techniques can help mitigate these individual differences but they typically require paired brain data with the same stimuli between individuals, which are often unavailable. 
Here we present a neural code conversion method that overcomes this constraint by optimizing conversion parameters based on the discrepancy between the stimulus contents represented by original and converted brain activity patterns. This approach, combined with hierarchical features of deep neural networks as latent content representations, achieves conversion accuracies that are comparable with methods using shared stimuli. The converted brain activity from a source subject can be accurately decoded using the target\u2019s pre-trained decoders, producing high-quality visual image reconstructions that rival within-individual decoding, even with data across different sites and limited training samples. Our approach offers a promising framework for scalable neural data analysis and modeling and a foundation for brain-to-brain communication.<\/jats:p>","DOI":"10.1038\/s43588-025-00826-5","type":"journal-article","created":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T09:03:13Z","timestamp":1752224593000},"page":"534-546","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Inter-individual and inter-site neural code conversion without shared stimuli"],"prefix":"10.1038","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-9031-8412","authenticated-orcid":false,"given":"Haibao","family":"Wang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9474-3427","authenticated-orcid":false,"given":"Jun Kai","family":"Ho","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0949-2406","authenticated-orcid":false,"given":"Fan L.","family":"Cheng","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6384-8098","authenticated-orcid":false,"given":"Shuntaro 
C.","family":"Aoki","sequence":"additional","affiliation":[]},{"given":"Yusuke","family":"Muraki","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5633-5630","authenticated-orcid":false,"given":"Misato","family":"Tanaka","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0001-7013-2098","authenticated-orcid":false,"given":"Jong-Yun","family":"Park","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9300-8268","authenticated-orcid":false,"given":"Yukiyasu","family":"Kamitani","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,11]]},"reference":[{"key":"826_CR1","doi-asserted-by":"publisher","first-page":"1973","DOI":"10.1093\/cercor\/bhm225","volume":"18","author":"B Fischl","year":"2008","unstructured":"Fischl, B. et al. Cortical folding patterns and predicting cytoarchitecture. Cereb. Cortex 18, 1973\u20131980 (2008).","journal-title":"Cereb. Cortex"},{"key":"826_CR2","doi-asserted-by":"publisher","first-page":"S97","DOI":"10.1016\/j.neuroimage.2004.07.024","volume":"23","author":"DC Van Essen","year":"2004","unstructured":"Van Essen, D. C. Surface-based approaches to spatial localization and registration in primate cerebral cortex. Neuroimage 23, S97\u2013S107 (2004).","journal-title":"Neuroimage"},{"key":"826_CR3","doi-asserted-by":"publisher","first-page":"635","DOI":"10.1016\/j.neuroimage.2005.06.058","volume":"28","author":"DC Van Essen","year":"2005","unstructured":"Van Essen, D. C. A population-average, landmark-and surface-based (PALS) atlas of human cerebral cortex. Neuroimage 28, 635\u2013662 (2005).","journal-title":"Neuroimage"},{"key":"826_CR4","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1016\/S1053-8119(03)00049-1","volume":"19","author":"DD Cox","year":"2003","unstructured":"Cox, D. D. & Savoy, R. L. 
Functional magnetic resonance imaging (fMRI) \u201cbrain reading\u201d: detecting and classifying distributed patterns of fMRI activity in human visual cortex. Neuroimage 19, 261\u2013270 (2003).","journal-title":"Neuroimage"},{"key":"826_CR5","doi-asserted-by":"publisher","first-page":"2919","DOI":"10.1093\/cercor\/bhw068","volume":"26","author":"JS Guntupalli","year":"2016","unstructured":"Guntupalli, J. S. et al. A model of representational spaces in human cortex. Cereb. Cortex 26, 2919\u20132934 (2016).","journal-title":"Cereb. Cortex"},{"key":"826_CR6","doi-asserted-by":"publisher","first-page":"404","DOI":"10.1016\/j.neuron.2011.08.026","volume":"72","author":"JV Haxby","year":"2011","unstructured":"Haxby, J. V. et al. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72, 404\u2013416 (2011).","journal-title":"Neuron"},{"key":"826_CR7","doi-asserted-by":"publisher","first-page":"e56601","DOI":"10.7554\/eLife.56601","volume":"9","author":"JV Haxby","year":"2020","unstructured":"Haxby, J. V., Guntupalli, J. S., Nastase, S. A. & Feilong, M. Hyperalignment: modeling shared information encoded in idiosyncratic cortical topographies. eLife 9, e56601 (2020).","journal-title":"eLife"},{"key":"826_CR8","doi-asserted-by":"publisher","first-page":"289","DOI":"10.1016\/j.neuroimage.2015.03.059","volume":"113","author":"K Yamada","year":"2015","unstructured":"Yamada, K., Miyawaki, Y. & Kamitani, Y. Inter-subject neural code converter for visual image representation. NeuroImage 113, 289\u2013297 (2015).","journal-title":"NeuroImage"},{"key":"826_CR9","unstructured":"Chen, P.-H. C. et al. A reduced-dimension fMRI shared response model. In Advances in Neural Information Processing Systems Vol. 
28 (NeurIPS, 2015)."},{"key":"826_CR10","doi-asserted-by":"publisher","first-page":"118683","DOI":"10.1016\/j.neuroimage.2021.118683","volume":"245","author":"T Bazeille","year":"2021","unstructured":"Bazeille, T., Dupre, E., Richard, H., Poline, J.-B. & Thirion, B. An empirical evaluation of functional alignment using inter-subject decoding. NeuroImage 245, 118683 (2021).","journal-title":"NeuroImage"},{"key":"826_CR11","doi-asserted-by":"publisher","first-page":"120007","DOI":"10.1016\/j.neuroimage.2023.120007","volume":"271","author":"JK Ho","year":"2023","unstructured":"Ho, J. K., Horikawa, T., Majima, K., Cheng, F. & Kamitani, Y. Inter-individual deep image reconstruction via hierarchical neural code conversion. NeuroImage 271, 120007 (2023).","journal-title":"NeuroImage"},{"key":"826_CR12","unstructured":"Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems Vol. 25 (NeurIPS, 2012)."},{"key":"826_CR13","unstructured":"Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. Preprint at https:\/\/arxiv.org\/abs\/1409.1556 (2014)."},{"key":"826_CR14","unstructured":"Radford, A. et al. Learning transferable visual models from natural language supervision. In Proc. 38th International Conference on Machine Learning Vol. 139, 8748\u20138763 (PMLR, 2021)."},{"key":"826_CR15","doi-asserted-by":"publisher","first-page":"915","DOI":"10.1016\/j.neuron.2008.11.004","volume":"60","author":"Y Miyawaki","year":"2008","unstructured":"Miyawaki, Y. et al. Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60, 915\u2013929 (2008).","journal-title":"Neuron"},{"key":"826_CR16","doi-asserted-by":"publisher","DOI":"10.1038\/ncomms15037","volume":"8","author":"T Horikawa","year":"2017","unstructured":"Horikawa, T. & Kamitani, Y. 
Generic decoding of seen and imagined objects using hierarchical visual features. Nat. Commun. 8, 15037 (2017).","journal-title":"Nat. Commun."},{"key":"826_CR17","doi-asserted-by":"publisher","first-page":"432276","DOI":"10.3389\/fncom.2019.00021","volume":"13","author":"G Shen","year":"2019","unstructured":"Shen, G., Dwivedi, K., Majima, K., Horikawa, T. & Kamitani, Y. End-to-end deep image reconstruction from human brain activity. Front. Comput. Neurosci. 13, 432276 (2019).","journal-title":"Front. Comput. Neurosci."},{"key":"826_CR18","doi-asserted-by":"publisher","first-page":"e1006633","DOI":"10.1371\/journal.pcbi.1006633","volume":"15","author":"G Shen","year":"2019","unstructured":"Shen, G., Horikawa, T., Majima, K. & Kamitani, Y. Deep image reconstruction from human brain activity. PLoS Comput. Biol. 15, e1006633 (2019).","journal-title":"PLoS Comput. Biol."},{"key":"826_CR19","doi-asserted-by":"publisher","first-page":"eadj3906","DOI":"10.1126\/sciadv.adj3906","volume":"9","author":"FL Cheng","year":"2023","unstructured":"Cheng, F. L. et al. Reconstructing visual illusory experiences from human brain activity. Sci. Adv. 9, eadj3906 (2023).","journal-title":"Sci. Adv."},{"key":"826_CR20","doi-asserted-by":"publisher","first-page":"34","DOI":"10.1038\/s42003-021-02975-5","volume":"5","author":"T Horikawa","year":"2022","unstructured":"Horikawa, T. & Kamitani, Y. Attention modulates neural representation to render reconstructions according to subjective appearance. Commun. Biol. 5, 34 (2022).","journal-title":"Commun. Biol."},{"key":"826_CR21","doi-asserted-by":"publisher","first-page":"e82580","DOI":"10.7554\/eLife.82580","volume":"12","author":"MN Hebart","year":"2023","unstructured":"Hebart, M. N. et al. Things-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. 
eLife 12, e82580 (2023).","journal-title":"eLife"},{"key":"826_CR22","doi-asserted-by":"publisher","first-page":"116","DOI":"10.1038\/s41593-021-00962-x","volume":"25","author":"EJ Allen","year":"2022","unstructured":"Allen, E. J. et al. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat. Neurosci. 25, 116\u2013126 (2022).","journal-title":"Nat. Neurosci."},{"key":"826_CR23","unstructured":"Park, J.-Y., Tsukamoto, M., Tanaka, M. & Kamitani, Y. Sound reconstruction from human brain activity via a generative model with brain-like auditory features. Preprint at https:\/\/arxiv.org\/abs\/2306.11629 (2023)."},{"key":"826_CR24","doi-asserted-by":"crossref","unstructured":"Iashin, V. & Rahtu, E. Taming visually guided sound generation. In Proc. 32nd British Machine Vision Conference https:\/\/www.bmvc2021-virtualconference.com\/assets\/papers\/1213.pdf (BMVC, 2021).","DOI":"10.5244\/C.35.336"},{"key":"826_CR25","doi-asserted-by":"publisher","first-page":"e2304085120","DOI":"10.1073\/pnas.2304085120","volume":"120","author":"G Jiahui","year":"2023","unstructured":"Jiahui, G. et al. Modeling naturalistic face processing in humans with deep convolutional neural networks. Proc. Natl Acad. Sci. USA 120, e2304085120 (2023).","journal-title":"Proc. Natl Acad. Sci. USA"},{"key":"826_CR26","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1162\/imag_a_00032","volume":"1","author":"M Feilong","year":"2023","unstructured":"Feilong, M. et al. The Individualized Neural Tuning Model: Precise and generalizable cartography of functional architecture in individual brains. Imaging Neurosci. 1, 1\u201334 (2023).","journal-title":"Imaging Neurosci."},{"key":"826_CR27","unstructured":"Wasserman, N., Beliy, R., Urbach, R. & Irani, M. Functional brain-to-brain transformation with no shared data. 
Preprint at https:\/\/arxiv.org\/abs\/2404.11143 (2024)."},{"key":"826_CR28","first-page":"667","volume":"14","author":"SA Nastase","year":"2019","unstructured":"Nastase, S. A., Gazzola, V., Hasson, U. & Keysers, C. Measuring shared responses across subjects using intersubject correlation. Soc. Cogn. Affect. Neurosci. 14, 667\u2013685 (2019).","journal-title":"Soc. Cogn. Affect. Neurosci."},{"key":"826_CR29","unstructured":"Scotti, P. S. et al. MindEye2: shared-subject models enable fMRI-to-image with 1 hour of data. In Proc. 41st International Conference on Machine Learning Vol. 235, 44038\u201344059 (PMLR, 2024)."},{"key":"826_CR30","unstructured":"Zhou, Q., Du, C., Wang, S. & He, H. CLIP-MUSED: CLIP-guided multi-subject visual neural information semantic decoding. In Proc. 12th International Conference on Learning Representations https:\/\/openreview.net\/pdf\/c028064068fcf4f9af8e2dbdeddd65e4aecc1b9f.pdf (ICLR, 2024)."},{"key":"826_CR31","doi-asserted-by":"publisher","first-page":"1097","DOI":"10.1038\/s42256-023-00714-5","volume":"5","author":"A D\u00e9fossez","year":"2023","unstructured":"D\u00e9fossez, A., Caucheteux, C., Rapin, J., Kabeli, O. & King, J.-R. Decoding speech perception from non-invasive brain recordings. Nat. Mach. Intell. 5, 1097\u20131107 (2023).","journal-title":"Nat. Mach. Intell."},{"key":"826_CR32","unstructured":"Benchetrit, Y., Banville, H. & King, J.-R. Brain decoding: toward real-time reconstruction of visual perception. In Proc. 12th International Conference on Learning Representations https:\/\/openreview.net\/pdf?id=3y1K6buO8c (ICLR, 2024)."},{"key":"826_CR33","unstructured":"Azabou, M. et al. A unified, scalable framework for neural population decoding. In 37th Conference on Neural Information Processing Systems (NeurIPS 2023) https:\/\/openreview.net\/pdf?id=sw2Y0sirtM (NeurIPS, 2023)."},{"key":"826_CR34","doi-asserted-by":"crossref","unstructured":"Shirakawa, K. et al. Spurious reconstruction from brain activity. 
Neural Netw. 190, 107515 (2025).","DOI":"10.1016\/j.neunet.2025.107515"},{"key":"826_CR35","doi-asserted-by":"publisher","first-page":"284","DOI":"10.1364\/JOSAA.2.000284","volume":"2","author":"EH Adelson","year":"1985","unstructured":"Adelson, E. H. & Bergen, J. R. Spatiotemporal energy models for the perception of motion. JOSA A 2, 284\u2013299 (1985).","journal-title":"JOSA A"},{"key":"826_CR36","doi-asserted-by":"crossref","unstructured":"Rusu, R. B., Blodow, N. & Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In 2009 IEEE International Conference on Robotics and Automation 3212\u20133217 (IEEE, 2009).","DOI":"10.1109\/ROBOT.2009.5152473"},{"key":"826_CR37","doi-asserted-by":"publisher","first-page":"174","DOI":"10.1038\/s41593-017-0054-4","volume":"21","author":"R Polan\u00eda","year":"2018","unstructured":"Polan\u00eda, R., Nitsche, M. A. & Ruff, C. C. Studying and modifying brain function with non-invasive brain stimulation. Nat. Neurosci. 21, 174\u2013187 (2018).","journal-title":"Nat. Neurosci."},{"key":"826_CR38","doi-asserted-by":"publisher","first-page":"148","DOI":"10.1038\/s41582-018-0128-2","volume":"15","author":"AM Lozano","year":"2019","unstructured":"Lozano, A. M. et al. Deep brain stimulation: current challenges and future directions. Nat. Rev. Neurol. 15, 148\u2013160 (2019).","journal-title":"Nat. Rev. Neurol."},{"key":"826_CR39","doi-asserted-by":"crossref","unstructured":"Deng, J. et al. Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition 248\u2013255 (IEEE, 2009).","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"826_CR40","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/sdata.2019.12","volume":"6","author":"T Horikawa","year":"2019","unstructured":"Horikawa, T., Aoki, S. C., Tsukamoto, M. & Kamitani, Y. Characterization of deep neural network features by decodability from human brain activity. Sci. 
Data 6, 1\u201312 (2019).","journal-title":"Sci. Data"},{"key":"826_CR41","doi-asserted-by":"publisher","first-page":"525","DOI":"10.1038\/369525a0","volume":"369","author":"SA Engel","year":"1994","unstructured":"Engel, S. A. et al. fMRI of human visual cortex. Nature 369, 525\u2013525 (1994).","journal-title":"Nature"},{"key":"826_CR42","doi-asserted-by":"publisher","first-page":"889","DOI":"10.1126\/science.7754376","volume":"268","author":"MI Sereno","year":"1995","unstructured":"Sereno, M. I. et al. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268, 889\u2013893 (1995).","journal-title":"Science"},{"key":"826_CR43","doi-asserted-by":"publisher","first-page":"3310","DOI":"10.1523\/JNEUROSCI.20-09-03310.2000","volume":"20","author":"Z Kourtzi","year":"2000","unstructured":"Kourtzi, Z. & Kanwisher, N. Cortical regions involved in perceiving object shape. J. Neurosci. 20, 3310\u20133318 (2000).","journal-title":"J. Neurosci."},{"key":"826_CR44","doi-asserted-by":"publisher","first-page":"4302","DOI":"10.1523\/JNEUROSCI.17-11-04302.1997","volume":"17","author":"N Kanwisher","year":"1997","unstructured":"Kanwisher, N., McDermott, J. & Chun, M. M. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302\u20134311 (1997).","journal-title":"J. Neurosci."},{"key":"826_CR45","doi-asserted-by":"publisher","first-page":"598","DOI":"10.1038\/33402","volume":"392","author":"R Epstein","year":"1998","unstructured":"Epstein, R. & Kanwisher, N. A cortical representation of the local visual environment. Nature 392, 598\u2013601 (1998).","journal-title":"Nature"},{"key":"826_CR46","doi-asserted-by":"publisher","first-page":"e0223792","DOI":"10.1371\/journal.pone.0223792","volume":"14","author":"MN Hebart","year":"2019","unstructured":"Hebart, M. N. et al. Things: a database of 1,854 object concepts and more than 26,000 naturalistic object images. 
PLoS ONE 14, e0223792 (2019).","journal-title":"PLoS ONE"},{"key":"826_CR47","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y. et al. Microsoft COCO: common objects in context. In European Conference on Computer Vision 740\u2013755 (Springer, 2014).","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"826_CR48","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1038\/nature18933","volume":"536","author":"MF Glasser","year":"2016","unstructured":"Glasser, M. F. et al. A multi-modal parcellation of human cerebral cortex. Nature 536, 171\u2013178 (2016).","journal-title":"Nature"},{"key":"826_CR49","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770\u2013778 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.90"},{"key":"826_CR50","unstructured":"Beyer, L., Zhai, X. & Kolesnikov, A. Better plain ViT baselines for ImageNet-1k. Preprint at https:\/\/arxiv.org\/abs\/2205.01580 (2022)."},{"key":"826_CR51","doi-asserted-by":"crossref","unstructured":"Jia, Y. et al. Caffe: convolutional architecture for fast feature embedding. In Proc. 22nd ACM International Conference on Multimedia 675\u2013678 (ACM, 2014).","DOI":"10.1145\/2647868.2654889"},{"key":"826_CR52","unstructured":"Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2019\/file\/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf (NeurIPS, 2019)."},{"key":"826_CR53","doi-asserted-by":"publisher","first-page":"178","DOI":"10.1016\/j.neuron.2018.11.004","volume":"101","author":"MD Lescroart","year":"2019","unstructured":"Lescroart, M. D. & Gallant, J. L. Human scene-selective areas represent 3D configurations of surfaces. 
Neuron 101, 178\u2013192 (2019).","journal-title":"Neuron"},{"key":"826_CR54","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1088\/0954-898X_15_2_002","volume":"15","author":"A Hsu","year":"2004","unstructured":"Hsu, A., Borst, A. & Theunissen, F. E. Quantifying variability in neural responses and its application for the validation of model predictions. Netw. Comput. Neural Syst. 15, 91\u2013109 (2004).","journal-title":"Netw. Comput. Neural Syst."},{"key":"826_CR55","unstructured":"Muraki, Y. Improving visual image reconstruction from brain activity using texture and structure similarity losses. Master\u2019s thesis, Kyoto University (2024)."},{"key":"826_CR56","unstructured":"Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T. & Clune, J. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In 30th Conference on Neural Information Processing Systems (NIPS 2016) https:\/\/scispace.com\/pdf\/synthesizing-the-preferred-inputs-for-neurons-in-neural-lmry4l8orq.pdf (2016)."},{"key":"826_CR57","doi-asserted-by":"publisher","unstructured":"Ho, J. K., Horikawa, T., Majima, K. & Kamitani, Y. Inter-individual deep image reconstruction. figshare https:\/\/doi.org\/10.6084\/m9.figshare.17985578.v1 (2023).","DOI":"10.6084\/m9.figshare.17985578.v1"},{"key":"826_CR58","doi-asserted-by":"publisher","unstructured":"Hebart, M. N. et al. Things-fMRI. OpenNeuro https:\/\/doi.org\/10.18112\/openneuro.ds004192.v1.0.5 (2022).","DOI":"10.18112\/openneuro.ds004192.v1.0.5"},{"key":"826_CR59","doi-asserted-by":"publisher","unstructured":"Park, J.-Y., Tsukamoto, M., Tanaka, M. & Kamitani, Y. Sound reconstruction from human brain activity via a generative model with brain-like auditory features. figshare https:\/\/doi.org\/10.6084\/m9.figshare.23633751.v9 (2023).","DOI":"10.6084\/m9.figshare.23633751.v9"},{"key":"826_CR60","doi-asserted-by":"publisher","unstructured":"Wang, H. & Aoki, S. 
Inter-individual and inter-site neural code conversion without shared stimuli. Zenodo https:\/\/doi.org\/10.5281\/zenodo.14910040 (2025).","DOI":"10.5281\/zenodo.14910040"},{"key":"826_CR61","unstructured":"KamitaniLab. Inter-site image reconstruction for Deeprecon test data, Video1. YouTube (25 June 2024); https:\/\/www.youtube.com\/watch?v=i8-1niS2c8s"},{"key":"826_CR62","unstructured":"KamitaniLab. Inter-site image reconstruction for THINGS test data, Video2. YouTube (25 June 2024); https:\/\/www.youtube.com\/watch?v=yjPzEomHFcc"},{"key":"826_CR63","unstructured":"KamitaniLab. Inter-site image reconstruction for NSD test data, Video3. YouTube (25 June 2024); https:\/\/www.youtube.com\/watch?v=isN6hQ_VcNs"}],"container-title":["Nature Computational Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s43588-025-00826-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s43588-025-00826-5","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s43588-025-00826-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,7]],"date-time":"2025-09-07T01:20:51Z","timestamp":1757208051000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s43588-025-00826-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,11]]},"references-count":63,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2025,7]]}},"alternative-id":["826"],"URL":"https:\/\/doi.org\/10.1038\/s43588-025-00826-5","relation":{},"ISSN":["2662-8457"],"issn-type":[{"value":"2662-8457","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,11]]},"assertion":[{"value":"9 August 
2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 May 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 July 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Advanced Telecommunications Research Institute International (ATR) and Honda Motor Company hold a patent (no. US9020586B2), which covers the foundational concept of neural code conversion; Y.K. is one of the inventors of this patent.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}]}}