Description
Is your feature request related to a problem?
Yes. I’m trying to use the Google Colab VS Code extension behind a corporate proxy/firewall. Colab in the browser works fine on the same machine/network, and I can reach the *.prod.colab.dev runtime URLs directly in a browser (I get a 404, not a network block), so outbound HTTPS is allowed.
In VS Code, the Colab extension successfully creates a runtime (“Colab CPU”), but the Jupyter kernel never transitions past executionState: "starting" and always shows connections: 0. Shortly afterwards, the extension decides shouldKeepAlive = false and tears down the runtime. This repeats in a loop.
This suggests that the extension (or its Jupyter integration) is unable to establish the kernel websocket/stream behind the proxy, even though normal HTTPS to the same host works.
Environment (example):
- VS Code: 1.106.1 (desktop, Windows)
- Jupyter extension: 2025.9.1
- Colab extension: latest from Marketplace
- Corporate proxy configured at OS level; VS Code either inherits it or uses `http.proxy` / `http.proxySupport` = "on"
Describe the solution you'd like to see
I’d like the Colab extension to fully support running behind a corporate proxy/firewall, in the same way that VS Code’s Jupyter support for other remote kernels does. Concretely:
- Ensure the extension respects VS Code's proxy settings: `http.proxy`, `http.proxySupport`, `http.noProxy`, and the system proxy / certificates (`http.systemCertificates`).
- Use the same networking stack / proxy handling as the core Jupyter extension for connecting to remote kernels, including websockets (see the sketch after this list).
- Optionally provide:
  - A clear error message when the kernel connection fails due to proxy/websocket issues, instead of silently keeping the kernel in `"starting"` and then shutting it down.
  - A small diagnostics command (e.g. "Colab: Network Diagnostics") that tests connectivity to the assigned `*.prod.colab.dev` host using the same mechanism as the kernel connection, and reports proxy-related failures.
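As a rough illustration of the first two points, here is a minimal sketch of proxy-aware kernel websocket setup. It assumes the extension (or its Jupyter layer) uses the `ws` and `https-proxy-agent` npm packages; the helper names are hypothetical, not the extension's real internals:

```typescript
import * as vscode from 'vscode';
import WebSocket from 'ws';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Resolve the proxy the same way the rest of VS Code does: the explicit
// http.proxy setting first, then the environment.
function proxyUrl(): string | undefined {
  return (
    vscode.workspace.getConfiguration('http').get<string>('proxy') ||
    process.env.HTTPS_PROXY ||
    process.env.https_proxy
  );
}

// Hypothetical helper: open the kernel websocket through the proxy.
// Passing an agent makes `ws` tunnel the upgrade via HTTP CONNECT, the
// same path that plain HTTPS requests already take through the proxy.
function openKernelSocket(wsUrl: string, headers: Record<string, string>): WebSocket {
  const proxy = proxyUrl();
  return new WebSocket(wsUrl, {
    headers,
    agent: proxy ? new HttpsProxyAgent(proxy) : undefined,
  });
}
```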
The ideal UX: from a proxied corporate network, if I can open Colab in a browser and reach the *.prod.colab.dev host, then the VS Code Colab extension should also be able to connect to that runtime and attach a Jupyter kernel.
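For the diagnostics command suggested above, something along these lines would already help users self-diagnose. The command id, function name, and messages are illustrative assumptions, not existing extension API:

```typescript
import * as vscode from 'vscode';
import { request } from 'https';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Hypothetical "Colab: Network Diagnostics" command: probe the assigned
// runtime host with the same agent the kernel connection would use, so a
// proxy failure surfaces as a readable message instead of a silent teardown.
export function registerDiagnostics(context: vscode.ExtensionContext, runtimeBaseUrl: string) {
  context.subscriptions.push(
    vscode.commands.registerCommand('colab.networkDiagnostics', () => {
      const proxy = vscode.workspace.getConfiguration('http').get<string>('proxy');
      const agent = proxy ? new HttpsProxyAgent(proxy) : undefined;
      const req = request(runtimeBaseUrl, { agent, method: 'GET' }, (res) => {
        // Even a 404 from *.prod.colab.dev proves the tunnel works.
        vscode.window.showInformationMessage(
          `Runtime host reachable (HTTP ${res.statusCode})` + (proxy ? ` via proxy ${proxy}` : '')
        );
        res.resume();
      });
      req.on('error', (err) =>
        vscode.window.showErrorMessage(`Runtime host unreachable: ${err.message}`)
      );
      req.end();
    })
  );
}
```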
Describe alternatives you've considered
- Using Colab exclusively in the browser, and VS Code only for local editing (no Colab integration). This works but loses the main benefit of the extension (single integrated environment).
- Running VS Code on a remote machine (e.g. via SSH/VS Code Server) that is not behind the corporate proxy. This is a workaround, but significantly complicates the workflow.
Both of these are workarounds rather than solutions; they avoid the proxy problem instead of letting the extension work in a typical enterprise environment.
Additional context
When I reproduce the issue with logging enabled, I see:
- A Colab server is successfully created:

```json
{
  "id": "d0650082-6999-43c3-8b8b-e874341be2e4",
  "label": "Colab CPU",
  "endpoint": "m-s-2mi6nxdrwgz5t",
  "connectionInformation": {
    "baseUrl": {
      "scheme": "https",
      "authority": "8080-m-s-2mi6nxdrwgz5t-c.us-central1-1.prod.colab.dev",
      "path": "/"
    },
    "token": "REDACTED",
    "tokenExpiry": "2025-11-21T08:40:36.670Z",
    "headers": {
      "X-Colab-Runtime-Proxy-Token": "REDACTED",
      "X-Colab-Client-Agent": "vscode"
    }
  }
}
```
- The kernel list shows:

```json
[
  {
    "id": "b1ce698b-7b62-416a-a4c2-8c498dc6c57d",
    "name": "python3",
    "connections": 0,
    "lastActivity": "2025-11-21T07:40:49.541648Z",
    "executionState": "starting"
  }
]
```
- Shortly after, the extension logs `shouldKeepAlive ... -> false` and the runtime is shut down.
This repeats whenever I try to connect. From the same machine, opening the https://8080-m-s-...prod.colab.dev/ URL in a browser works (404, not blocked), and Colab web notebooks run fine, so the network itself is not fully blocking the domain.
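To make that last point concrete, this standalone Node/TypeScript snippet (a repro sketch, not part of the extension) compares plain HTTPS and a websocket upgrade to the same runtime host through the proxy. The host value is copied from the log above; real kernel traffic would additionally need the token headers:

```typescript
import { request } from 'https';
import WebSocket from 'ws';
import { HttpsProxyAgent } from 'https-proxy-agent';

const HOST = '8080-m-s-2mi6nxdrwgz5t-c.us-central1-1.prod.colab.dev'; // from the log above
const proxy = process.env.HTTPS_PROXY;
const agent = proxy ? new HttpsProxyAgent(proxy) : undefined;

// 1) Plain HTTPS through the proxy: a 404 here matches the browser behaviour.
request({ host: HOST, path: '/', agent }, (res) => {
  console.log('HTTPS status:', res.statusCode);
  res.resume();
})
  .on('error', (e) => console.error('HTTPS failed:', e.message))
  .end();

// 2) Websocket upgrade through the same proxy. An "Unexpected server
// response" error still means the upgrade reached the host; a connect or
// timeout error instead points at the proxy blocking the CONNECT/upgrade
// path, which would explain the kernel never leaving "starting".
const ws = new WebSocket(`wss://${HOST}/`, { agent });
ws.on('open', () => { console.log('WS upgrade succeeded'); ws.close(); });
ws.on('error', (e) => console.error('WS failed:', e.message));
```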
Supporting the extension in proxy/firewalled corporate environments would make it much easier to adopt in typical enterprise setups where direct unrestricted outbound traffic is not possible.