Surf Controller is a CLI tool for managing your SURF cloud workspaces. It provides an interactive interface to monitor, pause, and resume your virtual machines (VMs).
- Interactive CLI interface
- Real-time VM status updates
- Pause and resume VMs with a single keystroke
- Multi-select functionality for batch operations
- Live log viewing
By default, your `~/.surf_controller/` directory will be used for data.
You can change this by setting `SURF_CONTROLLER_CONFIG_DIR` in your environment, e.g.:

```bash
export SURF_CONTROLLER_CONFIG_DIR=/srv/shared/
```

You can set this server-wide for all users with:

```bash
sudo vim /etc/profile.d/shared_env.sh
```

Use uv to install surf-controller globally or add it to your local environment. You can install uv with:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Of course, you can also use pip to install it (but then you might also need to install Python etc., something uv also takes care of).
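To check which directory will be used, you can reproduce the fallback logic in your shell (a sketch assuming the default resolution described above):

```bash
# Print the data directory surf-controller would use:
# the env var if set, otherwise ~/.surf_controller (assumed default)
CONFIG_DIR="${SURF_CONTROLLER_CONFIG_DIR:-$HOME/.surf_controller}"
echo "$CONFIG_DIR"
```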
Here are three different commands you could use; the first one (`uv tool`) is recommended for a global install.
Don't run all three, pick one.

Install globally:

```bash
uv tool install surf-controller
```

Install in a .venv:

```bash
uv add surf-controller
```

Install with pip if new tools make you nervous:

```bash
pip install surf-controller
```

All three commands install the `surfcontroller` command; if you don't know which to pick, use uv.
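Whichever route you pick, you can verify that the command ended up on your PATH with a small helper (plain shell, no assumptions about surf-controller itself):

```bash
# Report whether a command is available on PATH
check_cmd() {
  command -v "$1" >/dev/null 2>&1 && echo "ok" || echo "missing"
}
check_cmd surfcontroller
```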
You can find the SURF API documentation here: API Documentation
On your profile page you can create your own API token.
You can obtain the CSRF token by authorizing directly with the SURF API: use the green lock icon in the top-right corner to authorize with your API token, then execute a request to obtain the CSRF token.
Copy both tokens and use them during configuration.
Run the configuration by starting the controller:

```bash
surfcontroller
```
On first run, Surf Controller will:
- Create a configuration directory in your `SURF_CONTROLLER_CONFIG_DIR` folder (defaults to `~`)
- Copy a default configuration file into your config dir
- Prompt you for API and CSRF tokens and store them in your config dir
Run the controller:

```bash
surfcontroller
```
- `j`: Move cursor down
- `k`: Move cursor up
- `J`: Next page
- `K`: Previous page
- `Enter`: Select/deselect VM
- `a`: Toggle select all VMs
- `f`: Toggle filter VMs (by username)
- `R`: Toggle filter running VMs (show only running)
- `1`-`9`: Toggle custom filters
- `+`: Add custom filter
- `n`: Rename username
- `l`: Toggle log view

- `p`: Pause selected VMs
- `r`: Resume selected VMs
- `e`: Batch update end date for selected VMs
- `E`: Toggle pause exclusion (Shift+e)
- `c`: Bulk create VMs (wizard)
- `d`: Bulk delete selected VMs (with confirmation)
- `u`: Update VM list
- `s`: SSH into selected VM (select exactly one VM)
It is a bit hacky, but I follow these steps:

- Go to your SRC workspaces dashboard.
- Manually create a new VM with the settings (wallet, colab, type, etc.) you want to use as a template.
- Get the id of the VM from your dashboard.
- Go to the Swagger docs and run `GET /v1/workspace/workspaces/{id}/request` with the id of your VM.
- Have a look at the `templates/example.json` I provided. You should replace all the tags that have `<...>` with your own values.
- Keep the three values with `placeholder-...`; surfcontroller will replace these when creating VMs.
I created three templates for different colabs: ubuntu-8GBRAM, ubuntu-16GBRAM, and ubuntu-GPU. I plan to make the templates more flexible in the future.
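The placeholder substitution that surfcontroller performs amounts to a search-and-replace over the template. A rough sketch with sed (the placeholder name, file paths, and replacement value here are hypothetical; the real placeholders are in `templates/example.json`):

```bash
# Hypothetical placeholder name; see templates/example.json for the real ones
printf '{"name": "placeholder-name"}' > /tmp/template.json
sed 's/placeholder-name/alice-vm/' /tmp/template.json > /tmp/request.json
cat /tmp/request.json
# → {"name": "alice-vm"}
```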
- Bulk Creation Wizard: Press `c` to launch a step-by-step wizard for creating multiple VMs from a user list and template.
- Bulk Deletion: Press `d` to delete multiple VMs at once. Includes a safety confirmation dialog.
- Batch End Date Update: Press `e` to update the expiration date for multiple VMs simultaneously.
- Running Filter: Press `R` to quickly see only your running VMs.
- UI Improvements:
  - Progress Bars: Visual progress tracking for batch operations.
  - Status Indicators: Clear "OK" (green) or "FAILED" (red) status for actions.
  - Responsive Footer: Command bar adapts to screen width.
  - Persistent Filters: Custom filters are saved between sessions.
Note that adding a VM to the exclusion list only adds its ID to exclusions.json. The actual shutting down of the VM is done from a VM we control on the SURF cloud, so toggling this on your computer does not, by itself, change the actual pausing at 21:00.
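The exact format of exclusions.json is not documented here; assuming the VM IDs appear as quoted strings in the file, a quick local check could look like:

```bash
# Assumption: exclusions.json stores VM ids as quoted JSON strings
is_excluded() {  # usage: is_excluded <vm-id> <exclusions-file>
  grep -q "\"$1\"" "$2" 2>/dev/null && echo "excluded" || echo "not excluded"
}
is_excluded my-vm-id "$HOME/.surf_controller/exclusions.json"
```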
Edit `~/.surf_controller/config.toml` to customize your settings.
Contributions are welcome!
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to https://claude.ai/ for help with the curses implementation
The project includes a Docker-based scheduler for automating nightly VM pauses and managing exclusions via a Web GUI.
- Navigate to the `scheduler/` directory:

  ```bash
  cd scheduler
  ```

- Create a `.env` file from the sample:

  ```bash
  cp .env.sample .env
  ```

  Edit `.env` to set your desired `WEB_USERNAME` and `WEB_PASSWORD`.

- Start the scheduler:

  ```bash
  docker-compose up -d --build
  ```
- Web GUI: Accessible at `http://localhost:5001`. Use it to view VM status, toggle exclusions, and manually trigger the pause job.
- Nightly Job: A cron job runs every night at 21:00 to pause all non-excluded VMs.
- Configuration: The scheduler mounts your local `~/.surf_controller` directory, so it shares the same tokens and exclusions as the CLI tool.
- Installation: The Docker image installs the `surf-controller` package directly from the source code in the parent directory, ensuring it always runs the latest version of your code.
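The configuration sharing described above boils down to a volume mount. A hypothetical docker-compose fragment to illustrate the idea (the real `scheduler/docker-compose.yml` is authoritative, and the container-side path is an assumption):

```yaml
services:
  scheduler:
    volumes:
      # Share tokens and exclusions.json with the CLI tool (container path assumed)
      - ~/.surf_controller:/root/.surf_controller
```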
To deploy the scheduler to a remote machine, you can use the automated deployment script:

```bash
python3 scheduler/deploy.py
```

This script will:

- Configure Environment: Check/create `scheduler/.env`, generate a secure `WEB_PASSWORD` if needed, and prompt for the `DEPLOY_HOST`.
- Build & Push: Build the Docker image for `linux/amd64` and push it to Docker Hub.
- Deploy Files: SCP the `.env` and `docker-compose.deploy.yml` to the remote server.
- Restart Service: SSH into the remote server and restart the Docker service.
If you open port 5001, you will have a GUI dashboard to exclude VMs from pausing, e.g. http://123.45.67.89:5001, where 123.45.67.89 is the IP of your remote server.
Prerequisites:

- SSH access to the remote host (default `rgrouls@123.45.67.89`) via key-based authentication.
- Docker installed and running locally.