remoteipy

remote python execution for SolveIt

If you are not using SolveIt, this is probably not relevant to you.

Usage

Connecting

from remoteipy import connect, is_connected, disconnect
client = connect("ssh -p 16067 root@ssh3.vast.ai -L 8080:localhost:8080", py="/venv/main/bin/python")
> Connected to IPython kernel at root@ssh3.vast.ai:16067
Welcome to vast.ai. If authentication fails, try again after a few seconds, and double check your ssh key.
Have fun!
is_connected(client)
> True

Running code

%%remote

import sys
print(sys.version)
> 3.10.12 (main, Feb  4 2025, 14:57:36) [GCC 11.4.0]
%%remote

!nvidia-smi
> Wed Dec 10 10:05:54 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.172.08             Driver Version: 570.172.08     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          On  |   00000000:05:00.0 Off |                  Off |
| N/A   29C    P0             76W /  700W |       0MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Asking SolveIt

What GPU am I running? Would you call me GPU-rich or GPU-poor ;)?

You’re running an NVIDIA H100 80GB HBM3 — that’s the flagship datacenter GPU from NVIDIA’s Hopper generation!

I’d definitely call you GPU-rich 🤑 — that’s one of the most powerful GPUs available for AI/ML workloads. The H100 80GB is what the big labs use for training large models. You’ve got:

  • 80GB of ultra-fast HBM3 memory
  • Up to 700W of compute power at your fingertips
  • CUDA 12.8 support

You’re sitting pretty! What are you planning to run on this beast?

Running more code

%%remote

import torch

A = torch.ones(4096, 4096, dtype=torch.float32, device="cuda")
A
> tensor([[1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        ...,
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.]], device='cuda:0')

What did I just do?

You just created a 4096×4096 matrix filled with ones on your GPU (the H100)!

Specifically:

  • torch.ones() creates a tensor (PyTorch’s version of an array) filled with 1s
  • dtype=torch.float32 means each element is a 32-bit floating point number
  • device="cuda" puts it on your GPU memory instead of CPU RAM

This tensor is taking up about 64 MB of your GPU’s 80GB memory (4096 × 4096 × 4 bytes per float32).

Are you planning to do some matrix operations or neural network work with this?
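The 64 MB figure follows directly from the tensor’s shape and dtype; a quick back-of-the-envelope check (plain Python, no GPU needed):

```python
# Size of a 4096 x 4096 float32 tensor: elements * bytes per element.
n = 4096
elements = n * n                 # 16,777,216 elements (~16.8 million)
size_bytes = elements * 4        # float32 = 4 bytes per element
print(f"{elements:,} elements, {size_bytes / 2**20:.0f} MiB")
# -> 16,777,216 elements, 64 MiB
```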

Disconnecting

disconnect(client)
> Exit request sent.
is_connected(client)
> False

Installation

Install the latest version from the GitHub repository:

$ pip install git+https://github.com/achalpandeyy/remoteipy.git

Or install the latest release from PyPI:

$ pip install remoteipy

Documentation

Documentation is hosted on this GitHub repository’s pages. You can also find package-manager-specific guidelines on PyPI.