Virtual Machine Types


Would you please add more powerful VMs to the individual plan? I’ve run into memory and compute-time issues with gcp-large in the Kaggle TalkingData competition. For example, n1-standard-16 would be minimally sufficient for feature engineering on the full training dataset. Having more cores than the 8 in gcp-large would be very helpful for LightGBM training. The TalkingData competition is over today, but for the future, please…
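As a side note on why core count matters here: LightGBM parallelizes tree construction across CPU threads via its `num_threads` parameter. A minimal sketch of the relevant configuration (the parameter names are real LightGBM options; the specific values are illustrative, not from the original post):

```python
# Illustrative LightGBM parameter dict - more cores let you raise
# `num_threads`, which is the main way training scales on these VMs.
params = {
    "objective": "binary",   # TalkingData is a binary click-prediction task
    "num_threads": 16,       # set to the VM's physical core count
    "num_leaves": 63,        # example model capacity setting (illustrative)
}
```

On an 8-core gcp-large you would cap `num_threads` at 8; a 16- or 32-core machine lets that number rise accordingly.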


Hi Jonathan!

We’ll add an option to use more powerful machines - I cannot give a fixed date right now, but we’ll keep this in mind for upcoming releases. In your opinion, what would be a good choice - 16, 32, or 64 cores? Is the RAM associated with the n1-standard-* machines appropriate, or would you prefer more/less?


Thanks Piotr! I am not an expert in hardware specs, and “more” always seems like the answer… :slight_smile: However, we can take some clues: in this post, the #6 gold-medal finisher describes his setup:

I mostly used a 20-core Xeon at 2.3 GHz with 64 GB RAM and 64 GB swap. That machine is a bit slow, but it scales when using 20 threads. I also used another machine with a 4-core i7 and 2 GPUs to run Keras

Perhaps then a good choice for a “huge” machine is n1-standard-32, which has 32 cores and 120 GB of memory. This is approximately $1.50/hr.

Is this “just” a configuration addition? If so, then maybe add a few different sizes, including a K80 with 4 GPUs?


Thanks for the link - very useful info. We’ll have an internal discussion on priorities, but you can certainly expect more CPU/GPU-packed machines to appear in Neptune.


Hi Jonathan, it took us a while, but we have just released many more machine options. Please check them out and let me know what you think - in particular the 64 CPUs, 208.00 GB RAM, 4x GPU (P100, 64.00 GB RAM) option.


Great! Congratulations on this new feature. Can you give some transparency on how you arrived at the pricing? Is this just a pass-through of GCP pricing, or do you mark it up in a standard way?


Again, good point - we should mention it on the pricing page (we will). We match GCP’s pricing 1-to-1 -> we don’t add any commission. We believe that hardware for deep learning costs way too much, and if the platform is about letting data scientists do more, our pricing model cannot be based on a hardware commission.


Wonderful to hear! Thank you for doing that!!