Up until today, each job run on PiCloud has been limited to using only a single core. For those familiar with the Python global interpreter lock (GIL), this may not seem like a big deal at first. But as our users have let us know, the limitation is acute for the following reasons:
- Many performance-focused Python libraries including numpy release the GIL whenever possible, which means that even Python programs can leverage multiple cores.
- With the release of Environments, many of our users are running non-Python multithreaded programs. Some of these can use as many cores as we can throw at them.
- The most RAM available on a single core is 8GB (m1). Until now, a single job couldn't break this limit. Now, you can pool multiple cores together to get access to more RAM.
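Since a multicore job sees all of its cores as a single machine, a quick way to sanity-check the feature is to have the job report how many cores it can see. This is a minimal sketch using only the standard library; `report_cores` is a hypothetical helper, not part of the PiCloud API.

```python
import multiprocessing

def report_cores():
    """Return the number of CPU cores visible to this process.

    Inside a job submitted with _cores=4, this should report 4.
    """
    return multiprocessing.cpu_count()

print(report_cores())
```

Running this locally simply prints your own machine's core count; submitted as a PiCloud job, it reflects the cores the job was granted.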
How do I use it?
All you have to do is use the `_cores` keyword argument.
```python
# uses 4 cores
job_id = cloud.call(func, _type='f2', _cores=4)

# works the same for map jobs
job_ids = cloud.map(func, datapoints, _type='f2', _cores=4)
```
Each job gets the processing power of 4 f2 cores, and 14.8GB of RAM (4 cores x 3.7GB per f2 core). We use the f2 core in this example because, as the next section shows, the default c1 core does not support the new multicore feature.
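The RAM arithmetic above can be sketched as a small lookup. The 3.7GB-per-core figure for f2 comes from the post; the dictionary and helper name are illustrative, not part of any PiCloud library.

```python
# Assumed per-core RAM in GB, taken from the post's f2 figure.
RAM_PER_CORE_GB = {'f2': 3.7}

def job_ram_gb(core_type, cores):
    """Total RAM available to a job: per-core RAM times core count."""
    return round(RAM_PER_CORE_GB[core_type] * cores, 1)

print(job_ram_gb('f2', 4))  # 14.8
```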
How many cores per job?
The number depends on the type of core you select.
| Core Type | Supported Multiples |
|-----------|---------------------|
| c2 | 1, 2, 4, or 8 cores |
| f2 | 1, 2, 4, 8, or 16 cores |
| m1 | 1 or 2 cores |
Per our pricing page, a job using a single f2 core would cost $0.22/hour. A job using two f2 cores would cost $0.44/hour. In other words, the cost per core has stayed the same, and there are no additional fees. You’re still charged by the millisecond.
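The pricing arithmetic is linear in core count and billed by the millisecond, which can be sketched as follows. The $0.22/hour f2 rate is from the post; the helper itself is illustrative.

```python
# Assumed rate from the post: $0.22 per f2 core per hour.
F2_RATE_PER_CORE_HOUR = 0.22

def job_cost(cores, milliseconds, rate_per_core_hour=F2_RATE_PER_CORE_HOUR):
    """Cost of a job: cores x per-core hourly rate x elapsed hours,
    computed from millisecond-granularity billing."""
    hours = milliseconds / 3_600_000
    return cores * rate_per_core_hour * hours

# Two f2 cores for one hour: twice the single-core price.
print(round(job_cost(2, 3_600_000), 2))  # 0.44
```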
Multicore not enough for you? Let us know by leaving a comment.